Document Processors are not coming up and are in DEAD state

  • Question

  • I am using FAST ESP 5.3 Server. The environment was working fine, but when I tried to remove a collection (using collection-admin), the Document Processors on the non-admin node started throwing the error below and going into DEAD state.

    RegisterCapabilities failed: ProcessorDeploymentException: For pipeline '<<Pipeline>> (webcluster)', creating processor URLProcessor failed: ImportError: processors.Crawler

    Failed to execute 'import processors.Crawler': ImportError: No module named Crawler  

     The "Crawler.xml" file is present into "FASTSRCH\etc\processor" folder in Admin and non-admin nodes and are same. When I removed the URLProcessor stage from this above pipeline, it started to throw the same error into some other pipeline.

    When I run the "procserver.exe" utility under "FASTSRCH\bin" on the admin node, it runs successfully and displays extensive configuration details without any error. Running the same command on the non-admin node gives the error below:

    [2013-06-18 16:13:33.536] INFO       systemmsg Started Processor Server 5.3.4.67 (PID: 7626)
    [2013-06-18 16:13:34.081] ERROR      systemmsg Failed to execute 'import processors.Crawler': ImportError: No module named Crawler
    [2013-06-18 16:13:34.081] ERROR      systemmsg RegisterCapabilities failed: ProcessorDeploymentException: For pipeline '<<pipeline name>> (webcluster)', creating processor URLProcessor failed: ImportError: processors.Crawler
    [2013-06-18 16:13:34.081] INFO       systemmsg Terminating voluntarily.
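
    For what it's worth, the failing import can be reproduced outside procserver with a small Python check like the sketch below. The sys.path entry is only a placeholder for this example; point it at wherever the ESP Python modules actually live on the node.

    # Minimal sketch: attempt the same import that procserver reports failing,
    # to confirm which module is missing on this node.
    import sys
    sys.path.insert(0, r"C:\FASTSRCH\lib\python")  # assumed location of the ESP Python libs

    try:
        import processors.Crawler   # the import that fails in the log above
        print("processors.Crawler imported OK")
    except ImportError:
        import traceback
        traceback.print_exc()       # shows exactly which submodule could not be found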

    If I add new procservers on the non-admin node after removing the old one(s), they also go into DEAD state with the same error and never reach the "running" state.

    Has anyone faced this kind of scenario before?


    Ashish Gupta

    Tuesday, June 18, 2013 9:28 PM

All replies

  • Ashish,
    If you get the same error even when adding a docproc to a different node, something must have gone wrong beyond just using the 'collection-admin' command.
    If you have another working environment, I would suggest doing a diff of the etc\processor and lib directories (or copying them over from the working environment to see if that fixes it).
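
    For example, a rough sketch of such a comparison in Python (the UNC path to the working node is made up here; substitute your own share and install paths):

    # List files that exist under etc\processor and lib on a working install
    # but are missing on this node. Paths are placeholders, not ESP defaults.
    import os

    GOOD = r"\\workingnode\C$\FASTSRCH"   # assumed path to a healthy install
    BAD = r"C:\FASTSRCH"                  # assumed path on the broken node

    for sub in (r"etc\processor", "lib"):
        good_dir = os.path.join(GOOD, sub)
        bad_dir = os.path.join(BAD, sub)
        for root, _dirs, files in os.walk(good_dir):
            rel = os.path.relpath(root, good_dir)
            for name in files:
                if not os.path.exists(os.path.join(bad_dir, rel, name)):
                    print("missing on this node: " + os.path.join(sub, rel, name))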

    Wednesday, June 19, 2013 3:30 PM
  • When I compared the lib folder, I found that "crawler.pyc" was missing on the non-admin node.

    After copying crawler.pyc to the non-admin node, the document processors started running again.
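
    For anyone who hits the same thing, the copy itself is trivial; the paths below are only placeholders (mirror whatever relative location crawler.pyc has on your working node):

    # Copy the missing compiled module from the working node (placeholder paths).
    import shutil
    shutil.copy2(r"\\workingnode\C$\FASTSRCH\lib\crawler.pyc",
                 r"C:\FASTSRCH\lib\crawler.pyc")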


    Ashish Gupta

    Tuesday, June 25, 2013 2:19 PM
  • Thanks, Ashish, for the update; glad the issue was resolved.
    Tuesday, June 25, 2013 8:35 PM