Increasing the number of partitions

  • Question

  • Hi All

    I have a stable FS4SP install with content indexed and delivering search results... all happy.

    Now I would like to increase the number of partitions from the default 5 to the maximum of 10. I have a single-server environment, and according to http://technet.microsoft.com/en-us/library/gg482016 this requires an index reset on both the SharePoint side and the FAST server side, followed by a full crawl. I do not want to do a full crawl, as it would take the system offline for far too long (about a month, given the number of documents).
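
    For reference, my understanding of the documented path is roughly the following (a sketch only; the SSA name "FAST Content SSA" and the collection name "sp" are assumptions for illustration, so adjust them for your farm):

        # -- SharePoint side (SharePoint 2010 Management Shell) --
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $contentSsa = Get-SPEnterpriseSearchServiceApplication "FAST Content SSA"
        $contentSsa.Reset($true, $true)   # (disableAlerts, ignoreUnreachableServer)

        # -- FAST side (FS4SP shell on the FAST admin server) --
        Clear-FASTSearchContentCollection -Name "sp"

        # -- Then a full crawl of every content source --
        Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $contentSsa |
            ForEach-Object { $_.StartFullCrawl() }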

    I find this odd, as I have a complete set of FiXML files on my FAST server and I am not crawling new content.

    I am wondering whether anyone has any thoughts on how to do this without triggering a full recrawl?

    Marco

    Thursday, December 6, 2012 1:48 PM

All replies

  • Hi,

    You can use the procedure outlined in http://www.microsoft.com/en-us/download/details.aspx?id=28548, which covers adding/removing columns by using fixmlfeeder to refeed the FiXML.

    That should work equally well for increasing the number of partitions. As you are probably moving from around 15 million items to more, keep in mind that performance may degrade as you add more and more data, which may or may not be an issue depending on your hardware, item sizes, and search patterns. Just thought I'd mention it :)
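
    Also, before you start the refeed, it's worth sanity-checking that the FiXML set is complete on every indexer node. Something like this gives a quick per-directory file count (a sketch; %FASTSEARCH%\data\data_fixml is the usual location, but verify the path and layout on your own install):

        $fixmlRoot = Join-Path $env:FASTSEARCH "data\data_fixml"
        Get-ChildItem $fixmlRoot -Recurse |
            Where-Object { -not $_.PSIsContainer } |
            Group-Object { $_.Directory.Name } |
            Sort-Object Name |
            Format-Table Name, Count -AutoSize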

    Thanks,
    Mikael Svenson


    Search Enthusiast - SharePoint MVP/MCT/MCPD - If you find an answer useful, please up-vote it.
    http://techmikael.blogspot.com/
    Author of Working with FAST Search Server 2010 for SharePoint

    Friday, December 7, 2012 7:50 PM
  • Hi,

    We tried the fixmlfeeder procedure to increase the number of columns in our system, and I would not recommend it at all. Maybe we did something wrong, but we followed the whole procedure.

    At the beginning everything was fine (200 docs per second), so we left it running. The next day it was still OK (60 docs per second), though obviously not the same. At some point, however, the performance became awful (2 docs per second or less). We did some research and saw that every time an error occurred with a FiXML file, the process walked through all of the already-sent files (not resending them, but doing something with each one) before resuming from the point where the error occurred. We eventually reached a point where we had to wait two hours after each error before it started sending files again. So imagine an error in one of the last files...
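
    In hindsight, what we would have wanted is to drive the refeed in small, checkpointed batches, so that a failure only costs one batch instead of a walk over everything already sent. Roughly along these lines (a sketch; Feed-FixmlBatch is a made-up stand-in for the actual feeding command, not a real cmdlet):

        $batches  = Get-ChildItem "$env:FASTSEARCH\data\data_fixml" |
                        Where-Object { $_.PSIsContainer }
        $doneFile = "C:\refeed\done.txt"
        $done     = @()
        if (Test-Path $doneFile) { $done = @(Get-Content $doneFile) }
        foreach ($batch in $batches) {
            if ($done -contains $batch.Name) { continue }   # already fed; skip on restart
            Feed-FixmlBatch -Path $batch.FullName           # hypothetical stand-in
            Add-Content $doneFile $batch.Name               # checkpoint only after success
        }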

    With that scenario in mind, we decided to stop the process and recrawl everything. It was done in six days (roughly 20 million docs). I strongly believe that if we had kept going with the FiXML method, it would have taken three weeks or more.

    Cheers, Sergio.

    Monday, December 10, 2012 4:21 PM