Currently we have a 2-node SQL Server 2008 R2 cluster.
Each node is Fujitsu Blade with 36GB RAM.
On each node we allow SQL Server 28672 MB of RAM. Now on one of the servers we need to allocate more RAM (to start with, 30720 MB; if needed, I guess we can go to 32768 MB?).
Will there be any kind of downtime? Can I change the setting on one node and then on the other? What is the right way to do this?
Any kind of advice is welcome, since this is the first time I'm doing this on a cluster...
Thanks in advance
Are you saying you need to change the maximum memory setting on your cluster depending on which node it is currently running on? Is there a reason for doing this?
This can be done by running a T-SQL script that checks the hostname during startup. Based on the hostname, you can change the maximum server memory setting to the appropriate value you want.
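A minimal sketch of that approach, as a startup procedure. The node names `NODEA`/`NODEB` and the procedure name are placeholders (not from the thread); the 30720/28672 MB values are the ones discussed above. Startup procedures must live in the `master` database.

```sql
USE master;
GO
-- Hypothetical startup procedure: choose a max-memory value based on
-- which physical cluster node the instance is currently running on.
CREATE PROCEDURE dbo.usp_SetMaxMemoryForHost
AS
BEGIN
    -- Physical node name (not the virtual/network name of the clustered instance)
    DECLARE @host sysname =
        CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);

    DECLARE @maxMB int = CASE @host
                             WHEN N'NODEA' THEN 30720  -- value wanted on node A
                             WHEN N'NODEB' THEN 28672  -- value wanted on node B
                             ELSE 28672                -- conservative default
                         END;

    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', @maxMB;
    RECONFIGURE;
END;
GO
-- Register it to run automatically every time the instance starts
-- (i.e. after every failover):
EXEC sys.sp_procoption N'dbo.usp_SetMaxMemoryForHost', N'startup', N'on';
```

Because the procedure runs at instance startup, the memory cap is re-evaluated automatically after each failover.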
We have 2 physical servers, and on each of them we have a different SQL Server instance. Both are SQL 2008 R2 and they are clustered.
One of the instances is currently very RAM hungry, and in a couple of days our developers will be adding new databases, creating new indexes, etc.
So I need info about changing the max memory value from 28 GB to, for starters, 30 GB. Do I need to change that value on both servers, or can I do it just on the problematic server? What is the proper way to do this in a cluster environment?
If the instance on node A fails over to node B, then node B is hosting both instances: node A's instance with max memory 30 GB plus node B's instance with 28 GB, on a server with only 36 GB of physical RAM; you see the problem? You need to make sure that when node A fails over to node B, you adjust the max memory parameter so memory is used accordingly.
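On the downtime question: `max server memory` is a dynamic setting, so changing it takes effect immediately, with no instance restart or failover required. It is also a per-instance setting that travels with the instance, so it only needs to be changed on the problematic instance. A sketch, using the values from the thread:

```sql
-- Raise max server memory from 28672 MB to 30720 MB on the
-- RAM-hungry instance. The change applies online (no restart).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 30720;
RECONFIGURE;

-- Verify the value actually in use:
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';
```

Just remember the point above: after a failover, the sum of both instances' max memory settings can exceed the 36 GB of a single node, so the per-node adjustment still matters.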
Best Regards,
Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/
- Marked as answer by Maggie Luo, Microsoft community contributor, Moderator, Sunday, July 08, 2012 5:18 PM
- Unmarked as answer by DamirN, Wednesday, July 25, 2012 9:11 AM