Loss of crawleradmin utility in a Ubermaster/Master configuration

  • Question

  • I have a 4-server fault-tolerant FSIS farm with 2 search rows and 2 index columns. I have an Ubermaster and a Master set up. For reference, here's a crude server matrix:

    Ubermaster

    Col 0 Master:   Col 1 Master:
    C0R0            C1R0
    C0R1            C1R1

    I noticed that as soon as I configured fault tolerance for the index columns, I could no longer run the crawleradmin utility. I receive a 10061 socket error. I understand that the Ubermaster and Master processes are running the crawl instead of the conventional Crawler process.

    Anyone else getting this error with such a configuration? Should I use another utility instead?

    Tuesday, February 8, 2011 7:01 PM

Answers

  • I found my issue. It looks like you have to explicitly call the index server in a distributed setup. I left out the -C flag. Thanks to jamalcom for the assist!

    Here are the commands for crawleradmin (notice the -C flag):

    crawleradmin: option [option ..]

    General Options

    -C hostname[:port] Connect to hostname instead of local host (--crawlernode)
    -o <configdir>     Do offline mode (--offline; assumes default configdir
                       'd:\esp\data\crawler\config'). Only applicable with
                       -a, -d, -c, -q, -G, -f, --getdata and --verifyuri
    -l <loglevel>      32-bit hexadecimal loglevel or one of:
                       debug, error, info, verbose, warning
    -h                 This information (--help)
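
    For example, in this farm the call looks roughly like the sketch below; c0r0.domain.com is a placeholder for your actual index column master, and the port (if not the default) follows a colon:

    rem Point crawleradmin at the index column master instead of localhost.
    rem c0r0.domain.com is an assumed name; substitute your own index server
    rem and replace <option> with the operation you want to run.
    crawleradmin -C c0r0.domain.com <option>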

    • Marked as answer by ron_jones Wednesday, March 23, 2011 5:28 PM
    Wednesday, March 23, 2011 5:27 PM

All replies

  • Hi Ron,

    Regarding your node configuration, are all the nodes configured with fully qualified domain names (FQDNs)? Can you perform an nslookup from each node to itself and to the other three nodes' FQDNs? Can you perform a reverse nslookup to each IP address?

    Example:
    Forward nslookup: nslookup hostname.domain.com - should resolve the FQDN hostname.domain.com to the IP address

    Reverse nslookup: nslookup 192.168.0.100 - should resolve the IP address to the FQDN hostname.domain.com.

    If the nodes cannot perform forward and reverse nslookups to each other, this can indicate a network communication problem.
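
    For a four-node farm, a quick loop from a command prompt on each node covers all of the lookups at once. A minimal sketch, assuming the node names from the matrix above under domain.com and placeholder IP addresses (substitute your own):

    rem Forward lookups for every node (run from a command prompt on each server):
    for %n in (c0r0.domain.com c1r0.domain.com c0r1.domain.com c1r1.domain.com) do nslookup %n

    rem Reverse lookups; the IP addresses below are placeholders:
    for %i in (192.168.0.100 192.168.0.101 192.168.0.102 192.168.0.103) do nslookup %i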

    Could there be anything blocking the ports between the nodes, such as Windows Firewall or antivirus? If so, I would recommend unblocking the ports or, as a test, disabling the antivirus/firewall and seeing if the crawleradmin command will work.
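
    A quick way to test this, sketched below: confirm the module is listening on the target node, then try its port from another node. Port 15674 is taken from the sample error further down, and the Telnet Client feature must be installed:

    rem On the target node, list the ports that are actually being listened on:
    netstat -ano | findstr LISTENING

    rem From another node, test the module port directly; an immediate
    rem "Could not open connection" suggests a firewall block or no listener:
    telnet c0r0.domain.com 15674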

    I assume the 10061 error you received looks similar to the one below:

     [2006-01-12 14:39:20] WARNING ConfigServer [hostname] 16005 systemmsg ReRegister() to module at [hostname]:15674 raised exception - socket.error: [Errno 10061] Connection refused. 

    Is the [hostname] in your error the short hostname or the FQDN? Are any of your modules using the short server name (example: c0r0) while others are using the fully qualified hostname (example: c0r0.domain.com)? Are there any resolution failures in the logs? If so, I would recommend configuring the nodes to use FQDNs, not short names.
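
    One way to spot short-name registrations is to search the logs directly. A sketch, assuming the logs live under the d:\esp root seen in the crawleradmin help output (the exact log path may differ on your install):

    rem Recursively, case-insensitively search the ESP logs for the short name:
    findstr /s /i "c0r0" d:\esp\var\log\*.log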

    Thanks!

    Rob Vazzana | Microsoft | Enterprise Search Group | Senior Support Engineer | http://www.microsoft.com/enterprisesearch

    Monday, February 14, 2011 5:51 PM
    Moderator
  • Thanks for the input. Here's what I have:

    • Yes, all the nodes have FQDNs.
    • NSLOOKUP works just fine.
    • All the necessary ports are open.

    I will double-check to ensure only FQDNs are being used. Thanks for the heads up!

    Tuesday, February 15, 2011 7:35 PM