Disk caching on host or guest?

  • Question

  • OK, this is probably a noob question, but if we have 64GB RAM on our HyperV (2008R2) host, and we are running disk intensive software, do we:

    a) Allocate the 'minimum' RAM to the guest, and leave the rest for the host to use for disk caching, or

    b) Allocate the maximum RAM to the guest (leaving 1GB for the host), and let the guest use it for disk caching?

    Allocating half & half would seem to be a waste as they will probably both end up caching the same data (will they?), but it's not clear whether we're best letting the host or the guest do the caching. Or does it actually matter at all?

    I've had a good look around and haven't been able to find any relevant recommendations.

    More Info - the 'disk intensive' software is mainly a PostgreSQL server. We'll give that about 8GB for its shared buffers, but it seems to be recommended to use OS disk caching beyond that. There is a 1GB BBWC P420i RAID controller so write caching is performed on that. Currently, our biggest performance bottleneck seems to be due to uncached reads, so we are increasing the host RAM from 16GB to 64GB (and adding an SSD for index storage), but just want to know whether it's best to increase the guest RAM allocation, or leave it 'spare' on the host.
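
    For a rough feel of the numbers involved, here is a minimal back-of-the-envelope sketch of the two options in Python (the ~2 GB parent-partition reserve and the example guest sizes are illustrative assumptions, not figures from this thread):

        # Rough arithmetic for splitting 64 GB of host RAM between the
        # Hyper-V parent partition and a PostgreSQL guest.
        # Assumptions: ~2 GB reserved for the parent partition, and
        # shared_buffers fixed at the 8 GB mentioned above.

        HOST_RAM_GB = 64
        HOST_RESERVE_GB = 2          # assumed parent-partition overhead
        PG_SHARED_BUFFERS_GB = 8     # from the question

        def split(guest_ram_gb):
            """Return (RAM left for the guest OS page cache, spare host RAM)."""
            guest_page_cache = guest_ram_gb - PG_SHARED_BUFFERS_GB
            host_spare = HOST_RAM_GB - HOST_RESERVE_GB - guest_ram_gb
            return guest_page_cache, host_spare

        # Option (a): minimal guest RAM, hoping the host caches reads
        print(split(12))   # -> (4, 50): only ~4 GB of in-guest read cache
        # Option (b): give nearly everything to the guest
        print(split(62))   # -> (54, 0): ~54 GB usable as in-guest page cache
        # With option (b), PostgreSQL's effective_cache_size would normally be
        # raised to roughly shared_buffers + guest page cache (~60 GB here).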

    Monday, February 10, 2014 10:29 AM

Answers

  • OK, this is probably a noob question, but if we have 64GB RAM on our HyperV (2008R2) host, and we are running disk intensive software, do we:

    a) Allocate the 'minimum' RAM to the guest, and leave the rest for the host to use for disk caching, or

    b) Allocate the maximum RAM to the guest (leaving 1GB for the host), and let the guest use it for disk caching?

    Allocating half & half would seem to be a waste as they will probably both end up caching the same data (will they?), but it's not clear whether we're best letting the host or the guest do the caching. Or does it actually matter at all?

    I've had a good look around and haven't been able to find any relevant recommendations.

    More Info - the 'disk intensive' software is mainly a PostgreSQL server. We'll give that about 8GB for its shared buffers, but it seems to be recommended to use OS disk caching beyond that. There is a 1GB BBWC P420i RAID controller so write caching is performed on that. Currently, our biggest performance bottleneck seems to be due to uncached reads, so we are increasing the host RAM from 16GB to 64GB (and adding an SSD for index storage), but just want to know whether it's best to increase the guest RAM allocation, or leave it 'spare' on the host.

    With Windows Server 2008 R2 / Hyper-V 2.0 you don't have many options, as VHD access is not cached by the host. At all... So you're better off allocating more RAM to the VM, since I/O will be cached inside the guest. Windows Server 2012 R2 / Hyper-V 3.0 gives you more caching options, including the read-only CSV Cache, the flash-based Write-Back Cache that comes with Storage Tiering, and SMB access that is cached extensively at both the client and server sides. See:

    CSV Cache

    http://blogs.msdn.com/b/clustering/archive/2013/07/19/10286676.aspx

    Write Back Cache

    http://technet.microsoft.com/en-us/library/dn387076.aspx

    Hyper-V over SMB

    http://technet.microsoft.com/en-us/library/jj134187.aspx

    So it could be a good idea to upgrade to Windows Server 2012 R2 now :) 

    You may deploy third-party software to do RAM and flash caching, but you need to think twice, as it could be simply dangerous: one unexpected reboot and you may lose gigabytes of your transactions...

    Hope this helped a bit :)

    • Proposed as answer by Alex Lv (Moderator), Tuesday, February 11, 2014 9:11 AM
    • Marked as answer by pscs, Tuesday, February 11, 2014 10:00 AM
    Monday, February 10, 2014 1:06 PM

All replies

  • Hi,

    Additionally, you can refer to the following KB article for more detail on the storage caching layers in a Hyper-V guest system.

    The related KB:

    Hyper-V storage: Caching layers and implications for data consistency

    http://support.microsoft.com/kb/2801713

    Hope this helps.

    Tuesday, February 11, 2014 9:11 AM
    Moderator
  • With Windows Server 2008 R2 / Hyper-V 2.0 you don't have many options, as VHD access is not cached by the host. At all...

    Thanks, that's just what I needed to know.

    I may look at upgrading to 2012R2 as using SSD as write-back cache looks very interesting. I've been wary, because I didn't realise you could do an in-place upgrade.

    The servers are customer facing, so I need to minimise risk and downtime. We do daily backups, but it takes quite a while to do a backup (2TB), and for this we would need to prevent transactions after the backup has started. So, that would mean either we do the upgrade without a backup (:-O) or have a very long period of downtime (:-( )

    I suppose I could split the RAID 1 arrays and use a drive out of each as the backup. A bit risky, but nowhere near as risky as doing it without a backup at all.

    The preferred alternative would be to do the migration through another server, but we're a small company so buying another server (and datacentre rackspace to put it in) would put a strain on our budget - I'll have to think about that - it would give us a spare server.

    The related KB: Hyper-V storage: Caching layers and implications for data consistency (http://support.microsoft.com/kb/2801713)

    I had read that before asking the question. Unfortunately that article doesn't make clear whether VHDs are read-cached or not.

    Tuesday, February 11, 2014 10:00 AM

  • Thanks, that's just what I needed to know.

    I may look at upgrading to 2012R2 as using SSD as write-back cache looks very interesting. I've been wary, because I didn't realise you could do an in-place upgrade.

    The servers are customer facing, so I need to minimise risk and downtime. We do daily backups, but it takes quite a while to do a backup (2TB), and for this we would need to prevent transactions after the backup has started. So, that would mean either we do the upgrade without a backup (:-O) or have a very long period of downtime (:-( )

    I suppose I could split the RAID 1 arrays and use a drive out of each as the backup. A bit risky, but nowhere near as risky as doing it without a backup at all.

    The preferred alternative would be to do the migration through another server, but we're a small company so buying another server (and datacentre rackspace to put it in) would put a strain on our budget - I'll have to think about that - it would give us a spare server.

    1) Upgrading 2008 R2 -> 2012 R2 is pretty straightforward and it's definitely worth the effort :)

    2) If you want to minimize downtime, build a cluster and make the workload guest VM clustered.

    3) Disks installed on the same box are NOT a backup. For a backup you absolutely need to offload the data to some other location. Even an "el cheapo" Netgear unit would do the trick.

    4) Don't migrate through the other server; just keep it permanently so you have a Hyper-V cluster (see 2).

    Good luck!

    Tuesday, February 11, 2014 10:59 AM
  • 2) If you want to minimize downtime, build a cluster and make the workload guest VM clustered.

    4) Don't migrate through the other server; just keep it permanently so you have a Hyper-V cluster (see 2).

    Yes, I'm looking into clustering at the moment, but (a) it's a bit new to me, and (b) how could we do that if some HyperV servers are 2008R2 and one is 2012R2? I'd have thought we'd need the same Windows version to set up a Hyper-V cluster.

    My way of thinking is that when we have a 2012R2 server set up, we migrate things onto that (we can do this with minimal downtime using DB replication etc). Now we have a blank 2008R2 server which we can upgrade to 2012R2, then migrate the VMs from our next server onto that, and so on. At the end we'll have a set of 2012R2 servers. We'll have an extra server which we can move some load around onto to make things better now, but still be able to use if one of the other servers fails.

    For clustering, the problem I foresee is that we don't have a SAN (and have no chance of being able to afford one!), and disk I/O is the main bottleneck, so putting all the disk I/O onto one SMB 3 server would probably cripple things (it's currently spread across multiple servers with DAS), and we'd have a single point of failure which would affect all our servers.

    Now if a server fails, only some of the guests go down. Hopefully we can salvage the disks containing the VMs and put them into one of the other servers temporarily (if not, then we have our backups). If all the files are on one big server, then if that fails, we lose all the guests, and our other servers won't be able to cope with all the disk load. Setting up a SOFS cluster looks as if it'd be expensive.


    3) Disks installed on the same box are NOT a backup. For a backup you absolutely need to offload the data to some other location. Even an "el cheapo" Netgear unit would do the trick.

    I know it's not a backup. The problem is that doing a backup takes hours. Normally this isn't a problem because we run the backups while the VMs are live (we use Altaro's HyperV backup software). In this case that wouldn't be good, as we want the backup to have the data at the point of doing the upgrade (I know it's meant to be safe, but I prefer to be safe rather than sorry). So, that means we'd need to stop the VMs, then do the backup, then do the Windows upgrade - which will take too long.
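
    To put rough numbers on that backup window, a minimal sketch (the throughput figures are illustrative assumptions, not measurements from this hardware):

        # Rough backup-window estimate for a 2 TB full backup.
        # The throughput values below are illustrative assumptions only.

        BACKUP_SIZE_TB = 2
        BYTES_PER_TB = 1024 ** 4  # using binary TB for simplicity

        def hours_at(mb_per_sec):
            """Hours needed to copy the full backup at a given MB/s."""
            total_mb = BACKUP_SIZE_TB * BYTES_PER_TB / (1024 ** 2)
            return total_mb / mb_per_sec / 3600

        for rate in (100, 200, 400):  # MB/s: e.g. gigabit LAN, DAS, fast SSD/10GbE
            print(f"{rate} MB/s -> {hours_at(rate):.1f} hours")
        # 100 MB/s -> ~5.8 h, 200 MB/s -> ~2.9 h, 400 MB/s -> ~1.5 h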

    Taking one of the RAID1 disks out just before the upgrade would give us an instant snapshot. The risk is that one of the disks would fail before the array had been fully rebuilt. Not that this matters now, because I've just ordered a new server and 2012R2 licence.

    Tuesday, February 11, 2014 5:21 PM