Hyper-V 2008 R2 cluster with 2 nodes.
Host-to-host traffic is as expected at 75-100MB/s. Inter-VM traffic, even between VMs on the same host, was at 5-6MB/s.
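For anyone comparing numbers: file-copy speeds mix in disk I/O, so a raw TCP tool gives cleaner figures. A sketch using Microsoft's NTttcp (the receiver IP 192.168.1.20 is a placeholder):

```shell
:: Receiver side - run inside the destination VM (listens on the placeholder IP)
ntttcp.exe -r -m 8,*,192.168.1.20 -t 15

:: Sender side - run inside the source VM, pointing at the receiver's IP
:: -m 8,*,IP = 8 threads on any CPU targeting that address, -t 15 = 15-second run
ntttcp.exe -s -m 8,*,192.168.1.20 -t 15
```

Running this VM-to-VM on the same host versus host-to-host would separate a vSwitch problem from a physical NIC problem.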
I installed the latest firmware and drivers for all host NICs. I disabled TCP Connection Offload (Chimney), Large Send Offload, and RSS globally using netsh, and also disabled Virtual Machine Queues. The speed improved to 20MB/s, but this is still drastically slow.
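For reference, the global netsh changes were along these lines (Large Send Offload and VMQ are per-adapter settings, changed in the NIC's advanced properties rather than through netsh):

```shell
:: Disable TCP Chimney (connection offload) and receive-side scaling globally
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled

:: Disable task offload at the IP layer
netsh int ip set global taskoffload=disabled

:: Verify the resulting state
netsh int tcp show global
```

These are global host settings; a reboot of the host (or at least a restart of the affected VMs) is a reasonable precaution after changing them.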
We are also experiencing this on other 2008 R2 clusters.
Has anyone experienced this and found a solid solution? I have followed a lot of the other threads, but no one seems to have a definitive answer or solution for this.
Do you have the most current Integration Services installed on the VMs? Windows Update may update the host and its components, but it will not update the VMs.
I assume you aren't using the legacy network adapter.
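A quick way to check the Integration Services version from inside a guest is the registry value the KVP exchange component maintains; the path below is how it appears on 2008 R2-era guests in my experience, so treat it as an assumption for other OS versions:

```shell
:: Run inside the VM: shows the Integration Services version the guest reports
reg query "HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto" /v IntegrationServicesVersion
```

If the version lags behind the host's, upgrading via "Insert Integration Services Setup Disk" from the VM connection window and rebooting the guest is the usual fix.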
- Edited by Darren Blanchard Tuesday, November 05, 2013 12:53 PM added data
Has the problem been solved?
This is still an issue. I have just deployed the hotfix and seen no improvement.
I am also seeing these issues with Server 2012 VMs.
I have stood up a Server 2012 R2 Hyper-V cluster as well to test and am getting similar performance, at around 40MB/s. There has to be a reason for this. Does Hyper-V implement some sort of QoS on VM network traffic to prevent a single VM from saturating the link?
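On 2012 R2 you can at least rule out an explicit bandwidth cap: Hyper-V only enforces egress limits when bandwidth management is enabled on the vNIC. A quick check on the host (the VM name "TestVM" is a placeholder):

```powershell
# Run on the 2012 R2 host: an empty BandwidthSetting means no QoS cap is applied
Get-VMNetworkAdapter -VMName "TestVM" | Format-List VMName, BandwidthSetting

# To clear any limit explicitly (0 = unlimited, value is in bits per second)
Set-VMNetworkAdapter -VMName "TestVM" -MaximumBandwidth 0
```

If no bandwidth setting exists, the throttling is coming from somewhere else (offloads, drivers, or the vSwitch path), not from Hyper-V QoS.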
I could possibly understand it if this traffic were leaving the host and actually touching the physical network, but even VM-to-VM traffic on the same host has issues. Why does the NIC report a 10Gb connection if that throughput can't be achieved?
Does inter-VM traffic on the same host not utilise the host's RAM for this?
Surely I can't be the only one with this issue, and if other people are getting better performance, someone must be able to point me in the right direction.
Any and all help appreciated,