How to Link JDBC connector to custom pipeline stages

  • Question

  • Hi,

    I am trying to crawl a SQL Server table with ~20K rows using the JDBC Connector for FAST, which I am able to do successfully. The problem is that we have some custom pipeline stages that need to run for each crawled table row. The custom pipeline stage should extract the crawled property values produced after crawling each row. How should I configure pipeline extensibility so that the crawled properties for each crawled SQL Server table row are available in the subsequent pipeline stages?

    Thanks


    Regards Sagar Pattnayak

    Tuesday, March 19, 2013 1:51 PM

All replies

  • Hi,

    Where exactly are you stuck? If you need the crawled property names, you can find them under FAST Query SSA - Crawled Property - JDBC Category. That way you can get whichever field you might want to customize.


    Freddie Maize ..A story with Glory is History. Doesn’t matter whether Glory rest in the world of Demon or God. Lets create History..

    Thursday, March 21, 2013 12:40 PM
  • Hi Freddie

    Thanks for your reply. I am now getting all my crawled properties in the custom pipeline stage. The problem now is that one column in the table stores file contents in binary format, and I am not able to crawl it. The FAST crawler always fails with this exception:

    Customer-supplied command failed: Active process limit exceeded
    Process terminated abnormally: Unknown error (0xe0434f4d)
    Unhandled Exception: System.ArgumentException: '', hexadecimal value 0x13, is an invalid character.
       at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize)

    I had a look at the following thread

    http://social.technet.microsoft.com/Forums/en-US/fastsharepoint/thread/c9f6bbc1-31d5-450a-9b9d-3d6f48fe8611/

    which says binary data can be crawled when it is returned in a field named data. But I did not get such a field in my FIXML; I only get the size/length of the binary data.
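    (For context: 0x13 is a control character outside the range allowed by XML 1.0, which is why the XmlUtf8RawTextWriter call in the trace above throws. A minimal sketch of filtering such characters out of a string before it is serialized as XML; the function name is illustrative, not part of FAST:)

    ```python
    def strip_invalid_xml_chars(text):
        """Remove characters not allowed in XML 1.0.

        Valid XML 1.0 characters are #x9, #xA, #xD, #x20-#xD7FF,
        #xE000-#xFFFD and #x10000-#x10FFFF; anything else (such as
        the 0x13 byte from the exception) is dropped.
        """
        return "".join(
            ch for ch in text
            if ch in "\t\n\r"
            or "\x20" <= ch <= "\ud7ff"
            or "\ue000" <= ch <= "\ufffd"
            or "\U00010000" <= ch <= "\U0010ffff"
        )
    ```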


    Regards Sagar Pattnayak

    Thursday, March 21, 2013 2:31 PM
  • Hi,

    Add the Spy stage to your pipeline and check the spy.txt file. You will see the Attribute:data field, which has all your file content. Also make sure the binary field in your JDBC config file is mapped to data.

    However, I'm still not clear on what you are trying to achieve, so I can't give you a perfect reply. To access the file content you could write something like select column1, column2, fileblobcolumn as body from table in your query, then get the fileblobcolumn crawled property in your pipeline.
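    To make that last step concrete, here is a minimal sketch of how an external pipeline-extensibility stage could pull a named crawled property out of the input XML document that FAST hands it. The element and attribute names follow the FS4SP interchange format, and fileblobcolumn is just the example column name from above:

    ```python
    import sys
    import xml.etree.ElementTree as ET

    def get_crawled_property(doc_xml, prop_name):
        """Return the text of the first CrawledProperty element whose
        propertyName attribute equals prop_name, or None if absent."""
        root = ET.fromstring(doc_xml)
        for cp in root.iter("CrawledProperty"):
            if cp.get("propertyName") == prop_name:
                return cp.text
        return None

    if __name__ == "__main__" and len(sys.argv) >= 3:
        # FAST invokes the stage with input/output file paths, e.g.
        # MyStage.exe %(input)s %(output)s
        with open(sys.argv[1], encoding="utf-8") as f:
            blob = get_crawled_property(f.read(), "fileblobcolumn")
        # ... process blob, then write the output XML that FAST expects ...
    ```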


    Freddie Maize ..A story with Glory is History. Doesn’t matter whether Glory rest in the world of Demon or God. Lets create History..

    Thursday, March 28, 2013 9:25 AM
  • Posting the pipelineextensibility.xml file content and the code snippet here might help you as well.

    Freddie Maize ..A story with Glory is History. Doesn’t matter whether Glory rest in the world of Demon or God. Lets create History..

    Thursday, March 28, 2013 9:26 AM
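  • For reference, a pipelineextensibility.xml entry for an external stage generally has this shape. The command path, GUIDs, and property names below are placeholders, not values from this thread; the %(input)s and %(output)s tokens are replaced by FAST with the input/output file paths at run time.

    ```xml
    <PipelineExtensibility>
      <Run command="C:\stages\MyStage.exe %(input)s %(output)s">
        <Input>
          <CrawledProperty propertySet="00000000-0000-0000-0000-000000000000"
                           propertyName="fileblobcolumn" varType="31"/>
        </Input>
        <Output>
          <CrawledProperty propertySet="00000000-0000-0000-0000-000000000000"
                           propertyName="processedbody" varType="31"/>
        </Output>
      </Run>
    </PipelineExtensibility>
    ```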