Memory capacity and memory block in FAST Search for SharePoint 2010

  • Question

  • Hello,

    The FAST Search farm had been crawling items very fast until yesterday. I investigated the ULS log and found it flooded with the message below:

    At memory capacity. Load is 83%, configured to block at 80%. have been waiting 28:27 to queue this document  [documentmanager.cpp:969]  d:\office\source\search\native\gather\plugins\contentpi\documentmanager.cpp


    Memory usage of the server is always around 80%, and I don't want the FAST Search process to be blocked at memory capacity.
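    The throttling described by that log line can be sketched as follows (an illustrative model only, not FAST's actual implementation):

```python
# Illustrative sketch of the memory-capacity gate from the ULS message:
# documents are only queued while memory load stays below a configured
# block threshold (80% in the message above). Not FAST's real code.

def can_queue_document(memory_load_pct: float,
                       block_threshold_pct: float = 80.0) -> bool:
    """Return True if a new document may be queued for processing."""
    return memory_load_pct < block_threshold_pct

# At 83% load against an 80% threshold, queuing blocks and documents
# wait -- matching the "have been waiting 28:27" part of the message.
```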

    One thing I did to the server was to set a limit on the memory usage of SQL Server. I have since reverted it, but the issue persists.

    Please let me know if there is a setting to adjust the block threshold, or guide me on other possible causes.

     Best Regards,

    Saturday, December 31, 2011 8:29 AM

All replies

  • Hello,


    We have the same issue here. Did you resolve this problem?


    Best Regards,

    Wednesday, January 11, 2012 10:35 AM
  • You'll want to check whether there is a bottleneck on the FAST side.


    Per the below article, find the "Batches Ready" performance counter on the server(s) hosting the Content SSA crawl components, under "OSS Search FAST Content Plugin". If that number keeps growing consistently during your crawl, it probably indicates a bottleneck on the FAST side, as the FAST back-end cannot process submitted items quickly enough.
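    As a rough illustration (assuming you have collected periodic samples of the counter, e.g. exported from Performance Monitor), a "consistently growing" backlog could be detected like this:

```python
# Hypothetical helper: given periodic samples of the "Batches Ready"
# counter collected during a crawl, decide whether the backlog keeps
# growing (suggesting the FAST back-end cannot keep up with submissions).

def is_consistently_growing(samples, tolerance=0):
    """True if each sample is at least as large as the previous one,
    allowing dips of up to `tolerance`."""
    return all(b >= a - tolerance for a, b in zip(samples, samples[1:]))

growing = [12, 15, 19, 24, 31, 40]   # backlog builds -> likely bottleneck
healthy = [12, 15, 9, 4, 7, 3]       # backlog drains -> back-end keeps up
```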

    You may also want to enable additional debugging on the FAST Connector, in addition to the ULS logs:


    Create a string value called ContentAPILogFile under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Global\Gathering Manager that contains the full path and file name to use for logging.

    Restart the mssearch.exe process by restarting the osearch14 service.



    Also, is it possible that your content has changed and you have started crawling some very large files?

    Igor Veytskin
    Monday, January 16, 2012 12:18 PM
  • OK, we are currently resetting the index; afterwards we will start the full crawl again.

    Our content grew from 0 to 80 GB during an import job; when the import job finished, we started the full crawl.


    What counts as a large file? :-) We have many large files > 100 MB; about 15 of them are > 450 MB but < 650 MB.



    Monday, January 16, 2012 3:51 PM
  • Stephan,


    The below thread will give you some good information on the limitations of crawling and processing large files:



    If you left "MaxDownloadSize" and "MaxIndexSize" at their default values of 64 MB and 16 MB respectively, any files larger than 64 MB on the SharePoint side will be skipped by the crawler.  If you have increased that, you are still looking at 16 MB as the default maximum indexed on the FAST side for a managed property.

    But let's say you have decided to increase everything and let the files get processed.  Whenever you process very large files, you are looking at increased resource usage by the FAST Document Processors (especially memory).  Some files may push a DocProc over the 2 GB memory limit, causing it to terminate and restart.  If that happens often, you may be looking at a performance bottleneck on the FAST side.  It's a good idea to review the FAST logs in the %FASTSEARCH%\var\log\* directory.
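    To make the interaction of those limits concrete, here is a small sketch (the thresholds model the 64 MB / 16 MB defaults mentioned above; the real settings live in the SSA and FAST configuration, this only illustrates the outcome per file size):

```python
# Models the default limits discussed above: files over MaxDownloadSize
# (64 MB) are skipped by the crawler; of the rest, only the first
# MaxIndexSize (16 MB) per managed property gets indexed on the FAST side.

MAX_DOWNLOAD_MB = 64
MAX_INDEX_MB = 16

def classify(file_size_mb: float) -> str:
    if file_size_mb > MAX_DOWNLOAD_MB:
        return "skipped by crawler"
    if file_size_mb > MAX_INDEX_MB:
        return "crawled, but only first 16 MB indexed"
    return "fully indexed"

# A 500 MB file (like the 450-650 MB files mentioned earlier in the
# thread) is skipped entirely at default settings.
```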



    Igor Veytskin
    Tuesday, January 17, 2012 1:20 AM
  • Hi Igor,

    We didn't change the default values. The FAST server isn't eating up RAM; it is only happening on the SharePoint server.

    We found a performance counter, "Average dispatch time - ms", which is > 32'000 (which would mean 32 seconds?)
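    Since the counter name ends in "- ms", a reading of 32'000 is in milliseconds, i.e.:

```python
# "Average dispatch time - ms" reports milliseconds, so a value above
# 32'000 means each batch dispatch averages more than 32 seconds.

def dispatch_time_seconds(counter_value_ms: float) -> float:
    return counter_value_ms / 1000.0

# dispatch_time_seconds(32_000) -> 32.0 seconds per dispatch
```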

    We have a proxy sitting between the FAST and SharePoint servers...



    Tuesday, January 17, 2012 9:25 PM
  • Update on this issue: I have another forum entry focused on this problem:


    To keep things focused, I will only post updates there.


    We have configured the FAST Search server with the settings from: we then ran another full crawl, which cut the crawl duration from 22h down to 6h. OK, still not fast enough for 80 GB of content...


    What we don't know is whether this got faster because we had already run a full crawl on this content source...



    Wednesday, January 18, 2012 8:24 AM