Memory management on 64-bit cluster using SQL 2005

    Question

  • I have read conflicting information on how to manage memory on a 64-bit (x64) cluster using SQL 2005 (Active/Active).  Which of the following should I do (or something completely different):

    1.  Should I leave memory management up to the SQL server (set the upper memory very high)?
    2.  Should I set the upper memory to about 2 GB below the total memory of the server on which the instance resides?
    3.  Should I set the upper memory to about half (minus some) of the memory on the server (to account for failover of the other instance)?
    Sunday, April 05, 2009 1:57 PM

All replies

  • If you leave memory management to sql server then sql will try and grab as much memory as it can, if the workload is there.
    If you don't set a limit, there will be little or no memory for the other instance to use in the event of a failover. Not only that, but on 64-bit there could be contention with the OS, as there is no constraint on how much memory sql can use.
    So, you need to set an upper limit regardless and that limit needs to take into account the needs of both instances running on the same node, in the event of a failover.
    If one instance has a higher priority than the other you can give the more important instance a higher ceiling.
    That's the general recommendation, which does mean unused memory on both nodes that will only get used in a failover situation. The alternative is that sql server fails to start on a failover node, as the existing instance has hogged all the memory.
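
    For illustration only (the figures below are hypothetical and would need to come from your own sizing), a cap set with sp_configure on each instance might look like this; 'max server memory' is specified in MB and takes effect after RECONFIGURE:

        -- Hypothetical example: a 32 GB node hosting two instances, each capped
        -- so that both instances plus the OS fit after a failover.
        -- 'max server memory (MB)' is a dynamic, advanced option.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 14336;  -- ~14 GB for this instance
        RECONFIGURE;

    The equivalent would be run on the other instance, sized so that the two caps plus the OS headroom fit within the node's physical memory.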


    eraofdata.com
    Sunday, April 05, 2009 5:44 PM
  • Everything is an 'it depends'. Have you tested under load to see what you actually need? That's what you really have to do to work out what each instance needs.

    With clustering, you need to think of the failover condition - what happens when the instances need to coexist?

    I do not always recommend hard capping memory; that can be dangerous and limit you. My general recommendation to most of my customers is to set a minimum amount of memory so that in a failover, you know you always get that, and then let the instances sort it out up top. If you have contention after that, maybe set a maximum, but we're not in the old /PAE and AWE days where you had to set a maximum with 2000 and W2K.
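
    As a rough sketch of that approach (values purely illustrative), a floor like this on each instance guarantees it at least that much memory once the buffer pool has grown to it, while leaving the ceiling open:

        -- Hypothetical example: give this instance a 4 GB floor and leave
        -- 'max server memory' at its default so the instances sort out the rest.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 4096;
        RECONFIGURE;
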
    Allan Hirt Blog: http://www.sqlha.com/blog Author: Pro SQL Server 2008 Failover Clustering (Apress - due out June, 2009)
    Sunday, April 05, 2009 7:03 PM
  • Read Microsoft's Lessons Learned article on 64-bit sql server consolidation, particularly the "Managing SQL Server Memory Usage on a 64-Bit Platform" section. It's a bit out of date wrt the SQL Server version, but the underlying principles are version independent. It underlines how important it is to put a limit on memory usage in the 64-bit environment.

    ajmer dhariwal || eraofdata.com
    Sunday, April 05, 2009 8:30 PM
  • It's still an 'it depends'. While you may wind up hard capping memory, it is usually predicated on doing testing to see what you actually need. Picking an arbitrary number, or, say, if you have two instances, giving each 40% (for a total of 80%) and reserving 20% for the OS isn't necessarily the right thing to do, either. If you've got an N+I cluster where there is a dedicated failover node, and you set all of your failover node preferences, you may not run into contention. You may, which is why you would consider capping, but by setting a minimum amount of memory, you'll guarantee that SQL won't dip below a certain point.

    No one has a crystal ball, and there are rarely absolutes when it comes to SQL Server in my experience (but there are some). No two companies, applications, etc. use SQL Server the same.
    Allan Hirt Blog: http://www.sqlha.com/blog Author: Pro SQL Server 2008 Failover Clustering (Apress - due out June, 2009)
    Monday, April 06, 2009 5:48 AM
  • I get the point about setting a min memory setting, but in a failover situation on an active/active cluster the failed-over instance might have problems precisely because the original instance has not had its memory capped. It could therefore have grabbed more memory than is available to meet the minimum memory requirement of the newly failed-over instance, meaning the new instance might end up using the swap file.
    Sql instances, being independent and unaware of each other, are not going to release memory to each other, only to the OS, and then only if the OS is under memory pressure.



    ajmer dhariwal || eraofdata.com
    Monday, April 06, 2009 6:07 AM
  • First of all, the terminology is wrong. It's a multiple instance failover cluster; active/active went out with 7.0, even though people still use the term.

    The key word, which you use as well, is MIGHT ... you don't know. Part of the issue going back to the 32-bit 2000 days is that when you hard cap memory, you may limit what you can configure. You don't know UNTIL YOU TEST UNDER YOUR WORKLOAD. SQL Server *should* adjust memory dynamically after failover; it may not be instantaneous and it may be painful for a bit. That's where hard capping might seem like a better idea, but you also limit your long term scalability (which to some degree defeats the flexibility of full 64-bit deployments).

    There are NO absolutes here. Period. What works for one scenario does not mean it's the best thing for another.

    Without testing, it's just an educated guess.
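
    One simple way to watch that behaviour while testing under load is to compare the memory manager counters (assuming the usual counter names in sys.dm_os_performance_counters; on a named instance the object is prefixed MSSQL$<InstanceName> rather than SQLServer):

        -- Compare what SQL Server wants (Target) with what it currently has (Total)
        -- while the test workload is running.
        SELECT object_name,
               counter_name,
               cntr_value / 1024 AS value_mb
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');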


    Allan Hirt Blog: http://www.sqlha.com/blog Author: Pro SQL Server 2008 Failover Clustering (Apress - due out June, 2009)
    Monday, April 06, 2009 9:07 AM
  • Then we'll just have to agree to disagree.

    Much as The SQL Guy has found under his own steam.

    I personally will always hard cap a 64-bit server, as I've found situations where an uncapped instance causes memory contention with the OS, let alone with other instances that may run alongside it, regardless of whether it's standalone or clustered. Period.


    ajmer dhariwal || eraofdata.com
    Monday, April 06, 2009 10:48 AM
  • You just proved my point. Based on your experience with workloads in your environment, you hard cap memory. But it is dangerous (and I am someone who always used to advocate hard capping 100% of the time) to make sweeping recommendations without testing against your own workloads, not someone else's. There are other forces at work too, some of them political. Even if hard capping memory is the right thing to do, others may perceive the reserved headroom as wasted capacity, and they may only agree to hard cap after a problem has taught them the hard way.

    There are pros and cons to every scenario. To paint it as black and white just isn't accurate.


    Allan Hirt Blog: http://www.sqlha.com/blog Author: Pro SQL Server 2008 Failover Clustering (Apress - due out June, 2009)
    Monday, April 06, 2009 11:21 AM
  • In a multiple instance cluster, it's always better to have the memory capped for all the instances, whether it's 32-bit or 64-bit. The trade-off of splitting the resources pays off at the time of failover, which is what clustering is really meant for. Otherwise, both of your instances would be affected when a failover happens. Also, since you are in a 64-bit environment you need not worry about AWE and releasing memory to the OS, as that happens dynamically on Win2k3 servers.

    The percentage at which you want to cap the memory depends on the load on the servers and the total RAM. It's better to leave 2 GB for the OS in a 64-bit environment and divide the remainder between the instances.
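
    As a purely arithmetical sketch of that split (numbers hypothetical): on a 32 GB node, reserving roughly 2 GB for the OS leaves about 30 GB, i.e. around 15 GB per instance, which would then be set on each instance:

        -- Hypothetical example: 32 GB node, ~2 GB reserved for the OS,
        -- remaining ~30 GB split evenly between the two instances.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 15360;  -- 15 GB on this instance
        RECONFIGURE;
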

    Regards, Sashi.
    Tuesday, April 07, 2009 5:26 AM