Disks running out of space according to SharePoint 2010

    Question

  • I have an SBS 2011 server running SharePoint Foundation 2010. The configuration contains four disks: C: with the system, D: with Exchange data, E: with SQL data, and F: with user data.

    Control Panel / System / Advanced System Settings / Advanced / Performance Options / Advanced / Virtual Memory has "Automatically manage paging file size for all drives" checked, and the greyed-out list shows "C: System managed, D: None, E: None, F: None". So it seems to me that there should not be any paging file on F:.
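
    This can also be verified outside the GUI; a minimal sketch using standard WMI classes (nothing SharePoint-specific is assumed):

        # Page files currently in use, with sizes in MB.
        Get-WmiObject Win32_PageFileUsage |
            Select-Object Name, AllocatedBaseSize, CurrentUsage

        # Explicit page file settings (this returns nothing while
        # "Automatically manage paging file size" is enabled).
        Get-WmiObject Win32_PageFileSetting |
            Select-Object Name, InitialSize, MaximumSize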

    But I'm still getting a warning from SharePoint that "Drives are at risk of running out of free space", and when I click the warning I see that it points to F:.

    It seems ridiculous to have to keep 80 GB of free space on every disk just to eliminate this warning. Is there some way to tell SharePoint where the swap file / dump file will go, so that I can allocate this space on just one disk?
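
    Windows does record where a crash dump would be written, so the target drive can at least be confirmed. A minimal sketch reading the standard CrashControl registry values (note this only shows the dump target; the SharePoint rule checks every drive regardless):

        # CrashDumpEnabled: 0 = none, 1 = complete, 2 = kernel, 3 = small (mini) dump.
        # DumpFile is where a complete/kernel dump lands (default %SystemRoot%\MEMORY.DMP).
        Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl |
            Select-Object CrashDumpEnabled, DumpFile, MinidumpDir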


    MWebjorn

    Wednesday, July 15, 2015 5:51 PM

Answers

  • I agree with Trevor: this isn't a significant rule violation to worry about, and disabling the rule will not cause problems. Some additional comments:

    • Since it's virtualized, check the server's memory configuration in your virtualization manager. Your virtualization platform may have given it more memory than you originally configured.
    • Virtual server memory can be configured as static or dynamic. If dynamic, it will fluctuate over time, creeping up. My farms reside on Hyper-V 2012. For example, the settings for a farm application server are: startup: 12288; minimum: 12288; maximum: 32768 MB. Compare this with what the server actually sees, in the next point.
    • This server actually sees 28.9 GB of memory at present. The server has a 160 GB system drive, of which 86.1 GB is free. This farm has a single application server, on which all services are instantiated and running. As expected, the "Drives are at risk of running out of free space" rule violation is appearing in farm reports. This will go away at the next patching operation, which will involve restarting the server and which resets VM memory back to the starting point of 12 GB. It will slowly start to creep up again, and as the next patching window draws near, I'll see this rule violation appear again.
    • Because it's insignificant, and goes away eventually, I haven't given it much attention. I am, however, researching optimal memory configurations for all farm servers and intend to switch memory configuration from dynamic to static (a sketch of the switch follows this list). Static memory configuration is recommended by Microsoft for those SharePoint farm servers also hosting AppFabric (see reference below). Also, I doubled the system disk size, given my observations over time of how application server disk usage increased as I added services. I recommend a minimum 160 GB system disk for all mid-size farms.
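
    A minimal sketch of that dynamic-to-static switch, run on the Hyper-V host ('SP-AppServer' is a placeholder VM name; the 12 GB startup size mirrors the 12288 MB value above):

        # Static memory can only be set while the VM is off.
        Stop-VM -Name 'SP-AppServer'
        Set-VMMemory -VMName 'SP-AppServer' `
            -DynamicMemoryEnabled $false `
            -StartupBytes 12GB
        Start-VM -Name 'SP-AppServer'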

    References

    Thursday, July 16, 2015 3:05 PM

All replies

  • The rule executes as all or nothing. It checks each drive per server, and, if one is found deficient, the rule trips. Some questions (a quick way to gather the answers is sketched after the list):

    1. What is the total/used disk space on C?
    2. Is the server real or virtual?
    3. What is the RAM on the server?
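
    All three can be gathered in one go; a minimal sketch using standard WMI classes:

        # Size and free space for each local fixed disk, in GB.
        Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" |
            Select-Object DeviceID,
                @{n='SizeGB'; e={[math]::Round($_.Size/1GB, 1)}},
                @{n='FreeGB'; e={[math]::Round($_.FreeSpace/1GB, 1)}}

        # RAM plus manufacturer/model (a model of "Virtual Machine" indicates Hyper-V).
        Get-WmiObject Win32_ComputerSystem |
            Select-Object Manufacturer, Model,
                @{n='RAMGB'; e={[math]::Round($_.TotalPhysicalMemory/1GB, 1)}}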
    Wednesday, July 15, 2015 6:59 PM
  • Not that I'm aware of.

    This has always annoyed me about this alert in SharePoint. How about letting us set the limit? Or, like you said, setting the limit per drive, since their reasoning doesn't even make sense for drives with no page file.

    I usually just turn this health check off and rely on the "disks are running out of space" rule or some other mechanism to alert me when disks are low.

    Wednesday, July 15, 2015 7:09 PM
  • The health analyzer rule is looking for 5 × physical RAM of free space on C:.

    Just disable the health analyzer rule (Central Admin -> Monitoring -> Review rule definitions), and monitor free space on C: via another method that works for you. This amount of free space is only required for full memory dumps; out of the box, Windows is configured for kernel/small memory dumps only (usually appropriate).
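
    The same can be done from the SharePoint Management Shell; a minimal sketch (the wildcard match on the rule's summary text is an assumption — confirm it matches exactly one rule before disabling):

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        # Find the rule by the summary text shown in the warning, then disable it.
        $rule = Get-SPHealthAnalysisRule |
            Where-Object { $_.Summary -like '*at risk of running out of free space*' }
        Disable-SPHealthAnalysisRule -Identity $rule -Confirm:$false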


    Trevor Seward


    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

    Wednesday, July 15, 2015 8:22 PM
    Moderator
  • This is a virtualized SBS 2011 with 16 GB RAM. The C: drive capacity is 200 GB, with 86.5 GB free, so according to the rule (5 × 16 = 80) this should be sufficient for 5 kernel dumps. There is 18.6 GB of data on F:; with a 40 GB drive I got an SP error, with 60 GB a warning, and I had to increase it to 120 GB to get rid of the message. So now I have a 120 GB user disk with just 18.6 GB of data.


    MWebjorn

    Thursday, July 16, 2015 10:48 AM
  • How come it needs 80 GB to make a 16 GB memory dump? As far as I know, when a dump occurs it overwrites the last dump.

    MWebjorn

    Thursday, July 16, 2015 10:50 AM
  • I also hate this warning: it alerts if your free disk space is less than 5 × the physical memory of the server. Utterly mental! Critical is "× 2", which is potentially still loads of space.

    I went down the route of this: http://www.vspbreda.nl/nl/2013/11/sbs2011-the-sharepoint-health-analyzer-detected-an-error-drives-are-running-out-of-free-space-available-drive-space-is-less-than-twice-the-value-of-physical-memory-solved/

    That is: disable the disk space monitoring in SharePoint and monitor it via a different method.
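
    To preview which drives would trip either threshold, a small sketch using the 5 × (warning) and 2 × (critical) multipliers quoted above (requires PowerShell 3.0+ for [pscustomobject]):

        $ramGB = (Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory / 1GB
        Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" | ForEach-Object {
            $freeGB = $_.FreeSpace / 1GB
            [pscustomobject]@{
                Drive    = $_.DeviceID
                FreeGB   = [math]::Round($freeGB, 1)
                Warning  = $freeGB -lt (5 * $ramGB)   # "at risk of running out"
                Critical = $freeGB -lt (2 * $ramGB)   # "running out of free space"
            }
        }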

    Thursday, July 16, 2015 3:13 PM
  • To add to the dynamic memory point: it isn't supported with SharePoint 2013, period (that makes support cases easier; they don't have to parse it out by functionality). That said, it is both AppFabric and Search that have issues with dynamic memory.

    Trevor Seward


    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

    Thursday, July 16, 2015 3:42 PM
    Moderator
  • Trevor, I've read that AppFabric has issues with dynamic memory but haven't seen much yet about Search also having issues with this.  The TechNet article I reference above touches on Search very briefly. Can you elaborate or point me to a more focused posting or article on Search issues associated with dynamic memory?
    Thursday, July 23, 2015 2:36 PM
  • Trevor, I don't know what your position is within your company, but managing the SBS 2011 is not the only issue I have to attend to within mine. And having to keep track of what is a "good" warning and what is a "bad" warning gives me the creeps! Please note that this is not the only warning/error that turns out (after a gazillion hours of debugging) to be considered "good". There are a bunch of others originating in SP (such as VSS problems) which are filling my logs with what are said to be "good" warnings/errors.

    I therefore agree with Ian that I hate these warnings. Their mere existence causes me lots of extra work, just because I have to check what is going on. And what I can't understand is why the SP check is so unintelligent that it just looks at all drives, when the system already holds the information about which drive is used for the crash dump.

    To me it smells like a lazy/incompetent SP programmer who just hasn't bothered to make a proper solution, which is now causing end users a lot of extra work.


    MWebjorn

    Monday, July 27, 2015 7:08 AM
  • Stephan, NodeRunner (the Search host process) cannot take memory adjustments into account.

    Trevor Seward


    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

    Monday, July 27, 2015 1:41 PM
    Moderator