server 2012 hyper-v vSwitch performance is terrible

    Question

  • Running an HP DL380 G5, single quad-core, with 8 GB of RAM.

    The OS is on RAID-1, 146 GB 10k RPM SAS.

    The VMs boot from an EqualLogic SAN (24x 900 GB 10k RPM SAS drives) over iSCSI at 1 Gb/sec (this is a test lab).

    The host OS is Server 2012 with the Hyper-V role (all updates).

    The guest VMs are all Server 2012.

    I have been running bandwidth tests between VMs on the same node and have noticed that, no matter what I do, I cannot get the vSwitch to push more than about 2.0 Gb/sec between VMs. I am using iperf to generate the traffic.

    I have tried 2 VMs, 3 VMs pushing to 1 VM, and 2 pushing to 2, but no matter how I structure the tests, the aggregate bandwidth across all the VMs never exceeds ~2 Gb/sec over the vSwitch.

    Supposedly the vSwitch is a virtual 10 Gb/sec adapter, so I'm wondering why I can only push 20% of that. I'm using iperf because it doesn't require disk I/O to test bandwidth; I want to eliminate the guest OS and the SAN as bottlenecks.
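    A representative run looks something like this (a sketch in iperf 2.x syntax; the VM name, window size, and stream count are placeholders, not the exact values used):

    # On the receiving VM
    iperf -s -w 256k

    # On the sending VM: 4 parallel TCP streams for 30 seconds to the receiver
    iperf -c VM2 -w 256k -P 4 -t 30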

    Can anyone provide help on getting more performance out of the vSwitch?

    Monday, November 19, 2012 2:26 PM

All replies

  • Hi,

    Which kind of network have you assigned to the VMs? Internal, External, or Private?

    Also, have you updated the Integration Services for these guest virtual machines?

    You may also check the following posts and blogs, or try other performance test tools:

    Hyper-V: Virtual Networking Survival Guide
    http://social.technet.microsoft.com/wiki/contents/articles/151.hyper-v-virtual-networking-survival-guide-en-us.aspx

    Optimizing Performance on Hyper-V
    http://technet.microsoft.com/en-US/library/dd722835(v=BTS.10).aspx
    How to improve Virtual Server Performance
    http://support.microsoft.com/kb/555975

    Hope this helps!

    TechNet Subscriber Support

    If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.


    Lawrence

    TechNet Community Support

    Tuesday, November 20, 2012 6:46 AM
    Moderator
  • Are you using Network Adapters or Legacy Network Adapters in the VMs?  What operating systems are you running in the VMs?

    And, just FYI, the 10 Gbps is just a character string. I have seen nearly 40 Gbps pushed through a single switch using a 40 GbE NIC. The actual speed is limited primarily by the NICs connecting to the switch.


    tim

    Tuesday, November 20, 2012 3:52 PM
    I have tried both External and Private networks.

    Please read the thread: I mentioned that the VMs and the host server are both Server 2012. If I attempt to reinstall the Integration Services, it pops up a box saying they are already up to date.

    I am using standard virtual network adapters. I am not using legacy NICs.

    "How to Improve Virtual Server Performance" was written in 2007 and clearly states it applies only to Server 2003. (Although most of the suggestions are basic anyway.)

    "Optimizing Performance on Hyper-V" is about a BizTalk 2009 server and mostly deals with ways to set up the VHD for the VM.

    "Hyper-V: Virtual Networking Survival Guide" deals mostly with VLANs and DMZs, which my question is not about.

    Please go back and re-read my post. None of the hardware resources (memory, CPU, or disk I/O) is being pegged during these transfers, on either the VMs or the host hypervisor. In my mind there should be no reason why the vSwitch can't push more traffic.

    Wednesday, November 21, 2012 2:12 AM
  • Hi,

    The Hyper-V vSwitch speed depends on a lot of factors. Please refer to the following posts, which discuss similar issues:

    HYPER-V Guest shows 10Gig Connection but Transfer Rate to Other Machines on GigE Network Is 100Mb or less

    http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/thread/27b8312f-b7e8-4381-9274-3ec71266bd2a

    Windows 2008 network file transfers EXTREMELY slow

    http://social.technet.microsoft.com/forums/en-US/winservergen/thread/e807a6b5-5602-4600-ab4e-2e2057d2fc77

    Flaky and slow networking - Yukon Marvel 88e8056

    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2virtualization/thread/8aeb63b7-e368-457d-bd41-fe55d0caded4

    Hope it helps!


    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

    Wednesday, November 21, 2012 9:49 AM
  • Sigh. Yet again posting more nonsense articles that have nothing to do with my question.

    "Hyper-V Gust shows 10Gig but transfer on GigE network is 100mb or less" - this person is asking about transfer speeds between a VM and a host server going across a physical wire. This has absolutely nothing to do with my scenario.

    "Windows 2008 network file transfers extremely slow" - yet again you people are not reading my post. Let me make this clear. My client VM's and hyper-v host are running 

    Windows Server 2012

    "flaky and slow networking - yukon marvel" - I am not using Marvel NIC's. I am using Intel (not that it matters because this is a Hyper-V vSwitch issue since my traffic is never leaving the vSwitch on the host.
    Wednesday, November 21, 2012 2:28 PM
  • Hi,

    We suggest using the built-in Performance Monitor tool to test network transfers between VMs. Note that network performance between VMs also depends on the processors.

    Test network file transfers: copy a 100 MB file between virtual machines and measure the time required to complete the copy. On a healthy 100 Mbit (megabit) network, a 100 MB (megabyte) file should copy in 10 to 20 seconds. On a healthy 1 Gbit network, a 100 MB file should copy in about 3 to 5 seconds. Copy times outside of these ranges indicate a network problem. One common cause of poor network transfers is a network adapter that has "auto detected" a 10 Mbit half-duplex link, which prevents it from taking full advantage of the available bandwidth.
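    As a sanity check on those figures, the raw line-rate arithmetic looks like this (a sketch; the quoted 10-20 and 3-5 second ranges allow for protocol and disk overhead on top of these best-case times):

    # Best-case transfer time for a 100 MB file at a given link speed
    $fileBits = 100MB * 8                                  # 100 MB expressed in bits
    "100 Mbit: {0:N1} s" -f ($fileBits / 100e6)            # ~8.4 s at line rate
    "1 Gbit:   {0:N1} s" -f ($fileBits / 1e9)              # ~0.8 s at line rate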

    For more details about measuring network performance, please refer to:

    Measuring Performance on Hyper-V

    http://technet.microsoft.com/en-us/library/cc768535(BTS.10).aspx


    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

    Thursday, November 22, 2012 10:00 AM
    @Ruby, please stop replying to this post. You are not offering anything that is helpful. I am not trying to determine whether I can copy at 100 Mb/sec or even 1 Gb/sec; I am trying to determine how to get 3, 4, 6, or 8 Gb/sec out of VM-to-VM transfers. This cannot be accomplished via file copy, since a file copy depends on disk I/O. You obviously do not understand what the problem is, so please allow someone who actually understands to comment.

    I am using iperf since it generates traffic entirely in memory, without any read/write operations, which lets me test whether I can actually push multi-gigabit loads across the vSwitch.

    I have used both Task Manager's built-in network monitoring graphs and Performance Monitor to measure throughput during my tests. Neither reports anything higher than about 2.0 Gb/sec, no matter how many parallel threads I use to generate traffic.
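    For anyone who wants to sample the same thing from PowerShell on the host, something like this works (a sketch; it assumes the "Hyper-V Virtual Switch" counter set is present, and the values are bytes, so multiply by 8 for bits):

    # Sample aggregate vSwitch throughput once per second for 30 seconds
    Get-Counter -Counter "\Hyper-V Virtual Switch(*)\Bytes/sec" -SampleInterval 1 -MaxSamples 30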

    Thursday, November 22, 2012 10:08 PM
    I am seeing a similar issue on my VMs, but I am running 2008 R2 SP1 on my host and VMs as well...

    Both VMs are on the same host and the same network, and I am only seeing 400-600 Mbit/sec.

    I should see at the very least gigabit speeds on a 10 Gb virtual switch, so I agree that seems odd...

    You could try rerunning the "Integration Services Setup" and letting it rebuild the HAL and everything...

    I know you are running all 2012...  ;)
    Wednesday, November 28, 2012 9:03 PM
    @Ruby, please stop replying to this post. You are not offering anything that is helpful. I am not trying to determine whether I can copy at 100 Mb/sec or even 1 Gb/sec; I am trying to determine how to get 3, 4, 6, or 8 Gb/sec out of VM-to-VM transfers. This cannot be accomplished via file copy, since a file copy depends on disk I/O. You obviously do not understand what the problem is, so please allow someone who actually understands to comment.

    I am using iperf since it generates traffic entirely in memory, without any read/write operations, which lets me test whether I can actually push multi-gigabit loads across the vSwitch.

    I have used both Task Manager's built-in network monitoring graphs and Performance Monitor to measure throughput during my tests. Neither reports anything higher than about 2.0 Gb/sec, no matter how many parallel threads I use to generate traffic.

    On Gen7 blades I have no problem hitting 4 Gb/s for file copies between two blades, and the limit there is the storage, not the network. I have yet to find a free tool that can measure the same bandwidth between the two hosts so that I can verify I get 5-ish Gb/s; most free monitoring tools won't go past 1 Gb/s.


    My blog is at http://flemmingriis.com , let me know if you found the post or blog helpful or if it leaves room for improvement

    Wednesday, November 28, 2012 10:41 PM
    Hi Flemming,

    I have exactly the same problem now. Did you find any solution?

    Server 2008 R2, running on the same host and the same network, at full power!!

    Thanks in advance.


    Roendi

    Tuesday, February 26, 2013 3:15 PM
    If anyone seeing this has teamed their NICs, or is wondering what could cause iperf to show less bandwidth than expected from the Hyper-V vSwitch, see hyperv.nu's testing on Server 2012: http://www.hyper-v.nu/archives/marcve/2013/01/nic-teaming-hyper-v-switch-qos-and-actual-performance-part-3-performance/

    He found that enabling teaming and creating a vSwitch disabled RSS and enabled VMQ by default. A logical processor other than the default core was assigned as the VMQ base processor, and that core was loaded to 100%. Changing this value back to the default processor number of 0 redistributed the load.

    The explanation makes sense to me. As cyr0nk0r explained, he is not facing a NIC issue, but he would be CPU bound in the scenario he described. I would be surprised if this wasn't related to the issue, assuming teaming was involved. If not, a Connect bug should definitely be opened for this.
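    A quick way to check the assignment and move it back (a sketch using the inbox NetAdapter cmdlets on the host; the adapter name is a placeholder):

    # Show VMQ state and processor assignment for each capable adapter
    Get-NetAdapterVmq

    # Reset the VMQ base processor to the default of 0 (adapter name is a placeholder)
    Set-NetAdapterVmq -Name "Ethernet 1" -BaseProcessorNumber 0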

    Friday, March 15, 2013 4:45 AM
    For me it was McAfee.

    On all the Hyper-V guests (not the host) the performance was horrible. The machines would stop completely.

    Maybe it helps some users.


    Roendi

    Friday, March 15, 2013 2:07 PM
    @cyr0nk0r, have you tried installing Windows 2003 to confirm that it isn't an OS problem? :p

    Seriously, we had the same symptom on our 2008 R2 and 2012 VMs running on Hyper-V 3.0. We did not get throughput higher than 950 Mbit using iperf.

    Our problem was that GFI VIPRE antivirus installed an NDIS driver on the NIC. Disabling this increased throughput to ~4 Gbit.

    Another problem with that NDIS driver was that the VM's network went down during live migration, whether clustered or shared-nothing.
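    If you want to check for something similar, here is a sketch of how to spot and unbind a third-party filter driver inside the guest (the adapter name and DisplayName are placeholders; take the real DisplayName from the listing):

    # List everything bound to the guest's adapter; third-party filter drivers show up here
    Get-NetAdapterBinding -Name "Ethernet"

    # Unbind a specific component (copy its DisplayName from the listing above)
    Disable-NetAdapterBinding -Name "Ethernet" -DisplayName "<vendor filter driver>"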

    Friday, March 22, 2013 8:20 PM
    Check whether your server's NIC has an option called "Virtual Machine Queue" and disable it.

    Broadcom NICs have an incompatibility with Hyper-V in Windows Server 2012, but your NIC may have the same setting:

    http://fundamentallygeek.blogspot.com.br/2012/11/slow-network-access-within-virtual.html
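    For anyone who prefers PowerShell over the adapter's property page, here is a sketch of the same change (the adapter name is a placeholder; Enable-NetAdapterVmq reverses it if it makes no difference):

    # Check which adapters currently have VMQ enabled
    Get-NetAdapterVmq | Format-Table Name, InterfaceDescription, Enabled

    # Disable VMQ on the adapter that is bound to the vSwitch
    Disable-NetAdapterVmq -Name "Ethernet 1"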

    • Proposed as answer by nqkbl Thursday, April 25, 2013 6:00 AM
    Wednesday, April 3, 2013 3:53 PM
    Check whether your server's NIC has an option called "Virtual Machine Queue" and disable it.

    Broadcom NICs have an incompatibility with Hyper-V in Windows Server 2012, but your NIC may have the same setting:

    http://fundamentallygeek.blogspot.com.br/2012/11/slow-network-access-within-virtual.html

    Paulo... you are the man of the hour. I had tried everything, came across your answer, and it solved all my slowdowns! Thanks for taking the time to post an answer!

    Thursday, April 25, 2013 6:00 AM
    Check whether your server's NIC has an option called "Virtual Machine Queue" and disable it.

    Broadcom NICs have an incompatibility with Hyper-V in Windows Server 2012, but your NIC may have the same setting:

    http://fundamentallygeek.blogspot.com.br/2012/11/slow-network-access-within-virtual.html

    Paulo... you are the man of the hour. I had tried everything, came across your answer, and it solved all my slowdowns! Thanks for taking the time to post an answer!


    I second that Paulo is the MAN; so far I think you have solved my problem as well. Put this in your bag of tricks for Hyper-V and Server 2012, for sure, if you use Broadcom adapters.
    Thursday, May 23, 2013 2:45 PM
  • I'm having a similar problem - I have Broadcom 578x0 NICs and am using the following command to test bandwidth between VMs on separate hosts that are connected via PowerConnect 8132 10GbE switches:

    host:
    iperf -l 9000 -s                 # -s: run as server; -l 9000: use a 9000-byte read/write buffer

    client:
    iperf -l 9000 -c [servername]    # -c: connect to the listening host

    I'm seeing speeds of about 3.3 Gb/s, and disabling VMQ on both the physical adapters and on the virtual switch did not affect the speeds one way or another. What speeds have you been able to achieve via iperf on your Broadcom NICs?

    Wednesday, August 7, 2013 8:35 PM
    I have similar issues. I have 2x 10 Gb/s for server connectivity (vSwitch and virtual adapters for management and heartbeat), 2x 10 Gb/s for live migration, and 2x 10 Gb/s for iSCSI storage. All adapters are teamed. I ran some tests via iperf and got the following results:

    Server network (management): 3.5 Gb/s (Broadcom)

    Heartbeat: 3.9 Gb/s

    Live migration: 4.1 Gb/s (Emulex)

    iSCSI storage network: 12.4 Gb/s (Broadcom)

    VM to VM (connected through the vSwitch): 4.1 Gb/s

    I wonder why the throughput is so low; only the storage network reaches great performance. I experimented with several minimum bandwidth weights and default flow bandwidth settings, but nothing brought a relevant change. I will try disabling VMQ tomorrow and test again.
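    For reference, these are roughly the knobs referred to above (a sketch with placeholder switch/adapter names and weights; it assumes the vSwitch was created with -MinimumBandwidthMode Weight):

    # Weight applied to traffic that has no explicit reservation
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 40

    # Per-vNIC weights for the host management-OS adapters
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Heartbeat" -MinimumBandwidthWeight 10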

    Thursday, August 15, 2013 4:58 PM
    As soon as you create the vSwitch, the RSS function on the underlying network adapter is disabled, and performance will drop to 3-4 Gb/s even if you have 2x 10 Gb/s teamed adapters. This goes for your VM-to-VM network as well.
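    A quick way to confirm what RSS looks like on the team members (a sketch; adapter names and output vary by driver):

    # Show whether RSS is currently enabled on each physical adapter
    Get-NetAdapterRss | Format-Table Name, Enabled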

    Espen

    Sunday, November 10, 2013 9:09 AM
    One important point is that the G5 models are not on the support matrix for WS2012; I also wasted some time playing with a G5 and NIC teaming. So the best advice is to use only tested and certified devices :-)
    Sunday, November 10, 2013 11:52 AM
    OK, this solved my problem on a ProLiant DL360 G8. Thanks, man.

    Friday, December 27, 2013 7:13 PM
    Make sure you have the latest drivers for your NIC.  I was running into network latency issues on my Dell R720xd server because I had left the default drivers that Server 2012 R2 provides for the network card.  When I downloaded and installed the latest Broadcom drivers it became smoking fast!
    Thursday, June 12, 2014 8:44 PM
  • Hi,

    Sorry to re-open this old thread!

    Did you find a solution to your problem, JakesterPDX?

    Do you have any feedback to share, please?

    Thanks,

    Raf

    Monday, July 20, 2015 5:26 PM
    BGInfo even lists the network connection speed at 4 Gb/s.
    Is there any way around the VM-to-VM 4 Gb/s limit?

    Thursday, October 22, 2015 8:17 PM
  • I have had the same issue, wasted a day trying everything and then found your post. Solved the issue straight away. Fantastic and thanks.

    This is my first post but I had to say thank you

    Monday, November 16, 2015 6:37 PM
    Usually, you will need to enable vRSS to reach 10 Gbps within a single VM; multiple cores will also be required.

    vRSS needs to be enabled within the virtual machine.

    see: https://technet.microsoft.com/en-us/library/dn383582.aspx
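    A sketch of what that looks like (it assumes Server 2012 R2 or later, since vRSS is not available on the original Server 2012 release; the VM and adapter names are placeholders):

    # On the host: allow vRSS on the VM's network adapter
    Set-VMNetworkAdapter -VMName "TestVM1" -VrssEnabled $true

    # Inside the guest: enable RSS on the virtual adapter so receive processing spreads across vCPUs
    Enable-NetAdapterRss -Name "Ethernet"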

    Thanks

    Sam

    Tuesday, November 17, 2015 1:41 AM
    You just saved my morning. I was having the same issue with Broadcom NICs on a new Dell server; I disabled Virtual Machine Queue in the adapter settings and throughput increased 10x.  Thank you!!

    Sean

    HNF Tech

    Friday, July 7, 2017 1:39 PM
    You just saved my morning. I was having the same issue with Broadcom NICs on a new Dell server; I disabled Virtual Machine Queue in the adapter settings and throughput increased 10x.  Thank you!!

    Sean

    HNF Tech


    Can you please let me know what your NIC settings look like? Is everything related to VMQ disabled? Everything related to RSS disabled?
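    A quick way to dump the relevant settings for comparison (a sketch using the inbox cmdlets; output varies by driver):

    # Per-adapter VMQ and RSS state
    Get-NetAdapterVmq | Format-Table Name, Enabled
    Get-NetAdapterRss | Format-Table Name, Enabled

    # Driver-exposed advanced properties, including the VMQ and RSS keywords
    Get-NetAdapterAdvancedProperty | Format-Table Name, DisplayName, DisplayValue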
    Monday, July 10, 2017 11:46 AM