Why are my Hyper-V network connections so slow?

    Question

  • Hello,

    I have a Dell PowerEdge T620 with Intel I350 gigabit NICs.   The server and the NICs have all the latest firmware and drivers.

    I have each VM on its own dedicated virtual switch, so there is a quad-port NIC in the server and each VM has a dedicated port.

    The VMs say they are connected at 10 Gbit.

    When I try to copy between 2012 servers (VM to VM or VM to host), the fastest I get is about 50 MB/s.

    When I try to copy between a 2012 VM and a 2003 VM, I get about 2 MB/s.   (The 2003 server has the latest Integration Services installed.)

    I have only IPv4 enabled.  We are not using IPv6, so it is disabled.

    The server they are on has VERY fast arrays (two RAID 50 arrays on different controllers).  If I copy from one array to another, I see about 500+ MB/s.
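    (A timed copy from PowerShell is one way to get repeatable numbers for these tests - a minimal sketch, where both paths are placeholders:)

        # Sketch: time a large copy and derive MB/s (paths are placeholders)
        $file = "E:\iso\big.iso"
        $secs = (Measure-Command { Copy-Item $file "F:\temp\" }).TotalSeconds
        "{0:N0} MB/s" -f ((Get-Item $file).Length / 1MB / $secs)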

    Thursday, July 18, 2013 2:29 PM

All replies

  • Hello,

    I've run into a similar issue on a PowerEdge R620 three-node cluster (I don't have the NIC type at hand) while using NIC teaming on the VM NICs.

    My issue was with VMDq acceleration, which had to be turned off on the NIC's driver page to get decent transfer speeds.
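    If you'd rather script it than click through the driver page, the Server 2012 NetAdapter cmdlets expose the same setting - a sketch, assuming you substitute your own adapter name:

        # Show which physical adapters have VMQ enabled (Server 2012 and later)
        Get-NetAdapterVmq
        # Turn it off on the adapter backing the virtual switch
        Disable-NetAdapterVmq -Name "Ethernet 1"    # "Ethernet 1" is a placeholder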

    Ciao,

    Claudio


    MCSA, MCSE, MCT, MCITP:EA


    • Edited by ClaudioG64 Thursday, July 18, 2013 4:15 PM minor typos corrected
    Thursday, July 18, 2013 4:05 PM
  • Can you check whether you're using a legacy network adapter with Windows 2003?

    Darshana Jayathilake

    Thursday, July 18, 2013 5:15 PM
  • Thanks. Is that VMDq acceleration under the network card settings on the host server, within Hyper-V Manager on the host, or within the network card settings of the virtual machine's guest OS?
    Thursday, July 18, 2013 6:06 PM
  • Can you check whether you're using a legacy network adapter with Windows 2003?

    Darshana Jayathilake

    Hello,

    Within the 2003 VM, if I go to Local Area Connection properties, it says:

    Connect using "Microsoft Hyper-V Network Adapter"

    Thursday, July 18, 2013 6:12 PM
  • Hi,

    Please try using the legacy network adapter type for Windows Server 2003.
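    A minimal sketch of making that swap from PowerShell on a 2012 host - the VM and switch names are placeholders, and the VM has to be shut down first. Note that the legacy adapter is an emulated 100 Mbit device, so treat this as a diagnostic step rather than a fix:

        Stop-VM -Name "SRV2003"                    # placeholder VM name
        Remove-VMNetworkAdapter -VMName "SRV2003"  # removes the synthetic adapter
        Add-VMNetworkAdapter -VMName "SRV2003" -IsLegacy $true -SwitchName "VM Switch 1"
        Start-VM -Name "SRV2003"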


    Thanks and Regards
    David Shen
    MVP (File System Storage)
    Microsoft Storage Team - File Cabinet Blog

    Friday, July 19, 2013 2:40 AM
  • Hello,

    I got a huge amount of help from a friend.   Large Send Offload is enabled by default; I disabled it on a couple of servers and the throughput increased dramatically.

    Between VMs on the same NIC, speed is pretty good at 130 MB/s (though not nearly as fast as copying between drives on the host server).   Between VMs on different NICs, performance is OK but still leaves plenty of room for improvement at 70 MB/s.

    I tried turning on jumbo frames - that made it a little slower.   I tried increasing the send and receive buffer sizes - not much difference.
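    For what it's worth, on 2012 guests the same toggle is scriptable - a sketch (in the 2003 guest you still have to use the adapter's Advanced tab):

        # Inside a 2012 VM: list LSO state, then disable it on all adapters
        Get-NetAdapterLso
        Disable-NetAdapterLso -Name "*"    # turns off LSO for both IPv4 and IPv6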

    • Edited by boe_d Friday, July 19, 2013 4:38 AM text
    Friday, July 19, 2013 3:16 AM
  • Hi,

    Please try disabling all instances of TCP offloading and Receive Side Scaling on the host's network adapters first and check the result.
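    The global TCP offload knobs can be checked and flipped from an elevated command prompt - a sketch of the stock netsh syntax:

        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp show global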

    If the issue persists, please also try the following update:

    Performance decreases in Windows Server 2008 R2 when the Hyper-V role is installed on a computer that uses Intel Westmere or Sandy Bridge processors

    http://support.microsoft.com/kb/2517329/en

    If that still does not work, please also refer to the following blog for more information:

    Network Issues with Windows Server 2008 RDP and VS/Hyper-V on Dell Servers

    http://www.petri.co.il/network-issues-with-windows-server-2008-rdp-on-dell-servers.htm

    Regards,


    Arthur Li

    TechNet Community Support

    Friday, July 19, 2013 4:49 AM
  • If I disable Large Send Offload on the host server's own NIC, performance degrades significantly.   If I turn LSO off on the host NICs used by the VMs, it doesn't change much - a little slower, about 15%.   Receive Side Scaling is disabled by default on the Intels, so I left it disabled. It is only by disabling LSO within the VMs that I've been able to gain any speed.

    Unfortunately, I just tried the firmware update from Dell, and after applying it, toggling LSO no longer helps: I went from 60 MB/s before the firmware update down to about 12 MB/s after it.  I've tried a few drivers with no improvement.

    • Edited by boe_d Friday, July 19, 2013 3:42 PM update
    Friday, July 19, 2013 1:28 PM
  • Yes,

    the Virtual Machine Device Queues setting is on the driver settings page for the host NIC.

    Ciao,

    Claudio



    MCSA, MCSE, MCT, MCITP:EA

    Friday, July 19, 2013 1:31 PM
  • Yes,

    the Virtual Machine Device Queues setting is on the driver settings page for the host NIC.

    Thanks - under the host NIC settings, under Advanced, I find "Virtualization" (it doesn't say VMDq) - is that what you'd like me to disable? I'm guessing not, as disabling it doesn't seem to help performance in my case.

    • Edited by boe_d Friday, July 19, 2013 3:43 PM test
    Friday, July 19, 2013 2:59 PM
  • I did find that, for some reason, the Symantec firewall in Symantec Endpoint Protection decreased the speed by about 25%.  Since I have a real firewall on my network, I feel comfortable disabling the Symantec firewall. My speed is acceptable now.
    Saturday, July 20, 2013 6:42 PM
  • Hi,

    I am Chetan Savade from the Symantec Technical Support team.

    Could you please help me understand what steps you performed to measure the speed?

    Even though you have a hardware firewall, it's important that the end machines also have a software firewall installed for full protection.

    If you are not using the latest version of SEP, I would recommend upgrading to it.

    Regards,

    Chetan Savade

    Monday, July 22, 2013 1:51 PM
  • Hello,

    I just ran into this again with a new client running 2012 R2 and SEP 12.1.4103.

    I tried repeated copies of large files (e.g. 4 GB ISOs) from the host to the Hyper-V VMs.

    Things I tried beforehand (a scripted sketch of the host-side steps follows below):

    On the host - disabling Large Send Offload on the physical NIC and the NIC used for the virtual switch.

    I also adjusted the VM settings on the host through Hyper-V Manager.

    Enabling/disabling SR-IOV - although I didn't put anything on the VM that would take advantage of it.

    On the VM - disabling Large Send Offload and enabling/disabling jumbo frames.

    I finally just disabled the Symantec firewall - I left virus and spyware protection enabled and proactive threat protection enabled - and performance was good after that: about 35% faster and steadier.
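    For anyone who wants to script the host-side steps above on 2012 R2, a rough sketch - the adapter and VM names are placeholders, and "Jumbo Packet" is just the usual display name for the jumbo-frame property:

        Disable-NetAdapterLso -Name "vSwitch-NIC"            # LSO off on the vSwitch NIC
        Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 0   # SR-IOV off for that VM
        Get-NetAdapterAdvancedProperty -Name "vSwitch-NIC" -DisplayName "Jumbo Packet"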

    Send me an email if you like - boe_d   at Hotmail dot com

    Thursday, November 14, 2013 7:07 PM
  • FYI - another IT guy I work with in Texas said his boss was complaining that their RD server (a Hyper-V VM) is slow, so I said I bet I knew the issue. He made the change and claimed it doubled his speed.    I wasn't watching, so I'd be surprised if it truly doubled, but my speed would average about 45-50 MB/s with the Symantec firewall on and go to about 95 MB/s without it.
    • Edited by boe_d Thursday, November 14, 2013 7:44 PM
    Thursday, November 14, 2013 7:34 PM
  • I've tested with the native Windows firewall - it works fine on Hyper-V.
    Thursday, November 14, 2013 10:03 PM
  • I ran into this same issue today running Server 2012 R2 on a Dell R620.  I installed the latest Broadcom drivers, which can be found here: http://www.broadcom.com/support/ethernet_nic/netxtreme_server.php

    My speeds are now back to normal. I would suggest going that route if you are still having this issue.

     
    Tuesday, November 19, 2013 7:30 PM
  • Thanks. While drivers and offload settings can help performance, disabling the Symantec firewall on the VMs gave the biggest boost. I'm still hoping to hear back from Symantec on this issue, as the native Windows firewall does not have this problem.
    • Edited by boe_d Wednesday, November 20, 2013 8:38 PM
    Wednesday, November 20, 2013 8:37 PM