Very slow network performance with Intel NIC when TCP Large Send Offload is enabled

    Question

  • I have a Windows 2008 server with two NICs (Intel PRO/1000 PL and Intel PRO/1000 PM) and one virtual external switch connected to each NIC.

     

    With the 01/14/08 network driver, I get very low throughput on both NICs (~10 KB/s when downloading from the server, when it should be around 11 MB/s).

     

    I've found that disabling "TCP Large Send Offload (IPv4)" on the virtual switch and "Large Send Offload v2 (IPv4)" on the physical adapters solves the problem.
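    In case someone needs to script this rather than click through Device Manager: the same checkboxes are usually backed by standardized keywords in the network class registry key. A rough sketch only (it assumes the driver uses the standard *LsoV2IPv4 keyword; the 0007 subkey index below is an example that differs per machine, some older drivers use proprietary value names instead, and the adapter has to be disabled/re-enabled or the box rebooted afterwards):

        rem Find the 00xx subkey for the adapter (the GUID is the standard network adapter class)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}" /s /v DriverDesc

        rem Disable Large Send Offload v2 (IPv4) on the adapter at example index 0007
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v "*LsoV2IPv4" /t REG_SZ /d 0 /f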

     

    Is this a bug in the driver?

     

    Thank you,

     

    Ricardo Costa

    Tuesday, March 11, 2008 9:34 PM


All replies

  • This behavior is not unusual.

     

    TCP offloading appears to be problematic in most environments (except for the rare ones whose infrastructure actually makes use of it).

     

    It causes network traffic to be sent in chunks (you could also say it bursts).

     

    Even without Hyper-V, disabling offloading will generally improve network performance.
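    If you want to see what is currently enabled at the global level before changing anything, the stack can report it (available on Vista/2008 and later; the exact field and command names vary a bit between builds):

        netsh int tcp show global
        netsh int ip show offload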

     

    Tuesday, March 11, 2008 11:43 PM
    Moderator
  • Hi,

    Thanks for this tip; it saved me many hours of trying to figure out why external access to my Hyper-V boxes was so slow!

     

    Thank you,

     

    Julian Dyer

    Friday, April 18, 2008 1:43 PM
  • Thanks for the info. Since we updated to Hyper-V RC0, we have had performance issues. It got to the point where access to the guest systems was at a crawl. Disabling Large Send Offload helped drastically.

    Friday, May 9, 2008 6:10 AM
  • Yes, I had the same problems with TCP offload. Disabling it solves the problem on the physical machine.

     

    But I still have unbelievably slow performance on the virtual machines. I set up a Hyper-V server; every virtual machine I add uses one or two hard disks and one network adapter.

     

    It is not possible to use Remote Desktop or other network applications. I even installed a completely new virtual machine, with the same result. But this genius driver offers no TCP offload options. Adding the responsible value to the registry (at HKLM\...CurrentControlSet\Class\{....72}\Index) does NOT solve the problem, and the hint about setting EnableTCPA=0 and EnableRSS=0 does not help either.
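    (For reference, on Server 2008 those globals can also be flipped with netsh rather than direct registry edits; a rough equivalent of what I was trying, though the netdma option is not present on every build:)

        netsh int tcp set global rss=disabled
        netsh int tcp set global chimney=disabled
        rem NetDMA (the EnableTCPA setting); may not exist on older builds
        netsh int tcp set global netdma=disabled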

     

    Managing the machine through Hyper-V Manager is not a problem.

     

    What to do?

     

    Edit:

    The problem seems to be solved. I had tried disabling TCP offload on the Intel interface before, and it didn't help. But I tried it again (until now I had only deactivated TCP offload on the virtual interface, which is what helped on the physical machine). Since then, the Remote Desktop connection looks stable and is much faster.

    Friday, May 9, 2008 10:45 AM
  • That has also solved my issue so thank you very much!

    Friday, May 9, 2008 1:34 PM
  •  

    What is the best way to disable TOE in Server Core? Is there a system-wide setting, or is it a setting on each NIC?

     

    Thanks,

    Jim

    Friday, May 9, 2008 6:23 PM
  • That has been asked before... let me search... <time passes>

     

    Found it!

     

    http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3273603&SiteID=17
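    In short, for Server Core: the global offloads (TOE/chimney, RSS) are system-wide and set via netsh, while LSO is a per-NIC advanced property, so you flip it per adapter in the registry and restart the interface. A rough sketch (assuming the driver uses the standardized *LsoV2IPv4 keyword; the 0007 index and interface name are examples that differ per machine):

        netsh int tcp set global chimney=disabled
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v "*LsoV2IPv4" /t REG_SZ /d 0 /f
        netsh interface set interface name="Local Area Connection" admin=disabled
        netsh interface set interface name="Local Area Connection" admin=enabled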

     

     

    Friday, May 9, 2008 7:27 PM
    Moderator
  • Thank you veeeeeeeery much

    I disabled that... thing too, and all my problems are solved.

    Now the DPM agent works perfectly on WS 2008 with Virtual Server 2005.

     

    Tuesday, May 13, 2008 10:20 PM
  • I wish that I had seen this post before I gave up and rebuilt my server this weekend with Virtual Server 2005 R2; it might have saved me a lot of time. I had seen the other posts about offloading, but they apparently do not go far enough, as my problems got so bad that I was experiencing permanent data loss due to connections timing out while applications were saving state or database changes. I'll give this a try again once I have some downtime.

    Can anyone verify that this fixes the problems everyone has been experiencing? BTW, you do not need to create a virtual adapter to experience this: simply installing Hyper-V caused the physical adapters to exhibit this behavior, and removing Hyper-V does not fix the issue. I was only able to fix it with a clean install of the OS. Also, this used to be one of the major issues with Virtual Server 2005, and it was finally fixed about a year after R2 was released.
    Monday, March 30, 2009 6:46 PM
  • I am running a Windows Server 2008 SP2 terminal server (32-bit). Users connect via RDP. We do not have a virtual environment. Several users are losing their connections. In addition, I see many Event ID 4005 errors:

    ________

    Log Name:      Application
    Source:        Microsoft-Windows-Winlogon
    Date:          11/16/2009 5:52:59 AM
    Event ID:      4005
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      PCTS01.pindlercorp.network
    Description:
    The Windows logon process has unexpectedly terminated.
    ________

    I have 2 HP NC373i Multifunction Gigabit Server Adapters. If I disable the IPv4 Large Send Offload feature, would that resolve the Event ID 4005 errors? In addition, how would one know whether the IPv4 Large Send Offload feature is being utilized?
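    (On the last question: per-connection LSO use is not surfaced directly, but on Vista/2008 and later netstat can at least show which TCP connections are currently offloaded to the NIC via TCP Chimney, which is a reasonable proxy for whether offloading is in play:)

        netstat -t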

    Thanks.
    Monday, November 16, 2009 6:10 PM
  • My Hyper-V host shows 20 Mbps internet speed.
    But my Windows XP guest on that machine shows less than 1 Mbps.
    I have the guest using exactly half the host's resources (processors, memory).

    The physical network adapter on the host does not show the Large Send Offload property.
    The virtual network adapter on the host shows this, and I disabled it.

    The adapter on the guest does not show this property either.

    Friday, March 12, 2010 2:30 AM
  • My virtual machines were running extremely slowly over the network before making this change. Remote Desktop was constantly freezing up for a few seconds at a time, and the web server would take 10+ seconds to load a page. Disabling Large Send Offload on the physical and virtual adapters fixed the problem.
    Monday, May 3, 2010 3:23 PM
  • Hi there, a bit late, but I am just starting out with Hyper-V core. Can someone explain how I can configure both the virtual and physical NICs? I want to try disabling offloading as outlined above. Copying a 26GB VHD from my NAS to Hyper-V has currently taken 5 hours and is still going!

    Is there also any software, such as Dell's OpenManage, to manage the server's hardware so I can look for any faults? The server is a Dell T610.

     

    Kind regards

     

    Gary

    Friday, July 16, 2010 2:47 PM
  • Is this supposed to be fixed in Hyper-V Server 2008 R2? And could we get some more explicit instructions on how to disable "TCP Large Send Offload (IPv4)" on the virtual switch and "Large Send Offload v2 (IPv4)" on the physical adapters?
    Friday, October 22, 2010 4:39 AM
  • bump
    Saturday, January 8, 2011 5:48 AM
  • Hi Fred, I can tell you how to do it on the physical adapter; I don't know about the virtual one, though it could work the same way.

    I stumbled on this thread because I was wondering what Large Send Offload v2 (IPv4) stands for. Is it an enhanced version of v1? I would like the best performance on a single LAN. So now I'm guessing the v2 setting should only be usable for virtual machines, right? There is also Large Send Offload v2 (IPv6), and that one is enabled by default.

    Anyway, to get to your point: open Device Manager and find your NIC (also reachable via the Control Panel network connections), right-click it > Properties, and on the Advanced tab you will see all the NIC settings. Scroll down and look for Large Send Offload (IPv4) and Large Send Offload v2 (IPv4); you can enable or disable them there. It looks like you need to disable the plain IPv4 setting on the virtual adapter and enable v2, while disabling v2 on the physical adapter.

    Friday, March 4, 2011 11:06 PM
  • Thanks, The Hunter™, for the walkthrough. Set up my first web server in Hyper-V today and ran into this issue. Worked like a charm.
    Thursday, March 24, 2011 10:52 PM
  •  

    Chipping in to add my appreciation. I ran into this problem with both Hyper-V and VMware Server while testing out which technology to use for our virtual SharePoint 2010 environment.

    I saw that copying a file from my host OS to the client OS was happening at about 15 KB/s. I googled a bit and found that TSO (TCP segmentation offload) needs to be turned off on the host OS NIC. The Realtek driver calls this Large Send Offload, or LSO. I disabled it for the adapter, and now I am getting very fast transfers on my Hyper-V client machines.

    Thursday, April 14, 2011 8:55 AM
  • Bill,

    My advice is to try setting it up in a test environment and let it run for a couple of months. It seems pretty wonky to me, but as I find little tweaks along the way, things get better. For example, after I disabled LSO I was able to actually use the Hyper-V network; prior to this, it wouldn't allow me to even connect. But I still get wonky bursts where it seems to connect and disconnect every few minutes... particularly if I try to use the internet browser on the virtual machine... kind of like the ports close to allow something else through, then reopen to allow the virtual machine to connect.

    My last success happened about a week ago (I've been at it for a couple of months), when I was finally able to get the virtual machine to accept network discovery. Prior to then, network discovery was blocked over the domain. I still don't know why, but there may have been an update and reboot on the PDC that enabled it.

    The servers are still unable to turn on network discovery, but since they are accessed on the domain via ports enabled on the firewall, it isn't as big an issue as it is on the virtuals. I am hoping I will discover more things as time goes on so that I can reach the goal you dismissed when you rolled back to 2005. It certainly has not been overly successful so far.


    R, J

    Saturday, June 25, 2011 1:00 PM
  • SSvendsen,

    Would that be the physical Realtek adapter on the Hyper-V machine only, or are you also disabling TSO (aka LSO) on the virtual boxes as well? I get a really wonky connection when I log into the virtuals from my laptop. The connection works great for a few minutes, then it closes, and after about 15-60 seconds (it varies) it comes back. I have LSO disabled on the physical Hyper-V adapter (the one used by the virtual network manager) and also on the virtuals that reference that managed adapter.


    R, J
    Saturday, June 25, 2011 1:08 PM
  • Two years later, I was able to get that machine to work with Hyper-V under Windows Server 2008 R2, but only after disabling everything that had anything to do with TCP or IP offloading. I think I had to turn off 4 settings on one controller and 5 settings on the other. Once that was done, everything worked flawlessly. It is very apparent that the virtual controllers do not properly use many of the features of non-Intel devices (not all Intel devices work either), and that Hyper-V does not completely uninstall if you remove it.

    If you have any problems, start turning off the features that your adapter supports until it starts working. You'll need to weigh the cost of replacing the controller with a fully supported one (of which there is no list) against running your existing controller at a lower capacity than it physically supports. This has been an elusive issue with Microsoft's virtual server projects since they were first in beta, so don't expect a fix anytime soon.

    P.S. For those who don't know, the settings are on the actual device and can be changed in Device Manager. If you are still having trouble after turning off all the settings, then you will need to replace the controller(s) with a different make/model.
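    (If you would rather enumerate the candidate settings before clicking through Device Manager, the advanced properties live under the network class registry key; a rough sketch, where 0007 is an example index that differs per machine and per adapter:)

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" | findstr /i "Offload Lso Checksum Rss"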


    Bill Bosacker's Blog @ http://www.openSourceC.org/
    Saturday, June 25, 2011 6:43 PM
  • I had this same problem for two years as well. I turned off all the offloading settings on all my NICs (after switching out old ones as well), and everything began working immediately. I was ready to throw out two servers! Thank you a bunch, William and Brian!
    Friday, December 30, 2011 6:29 PM
  • Thanks for the tip.  We're experiencing the same problem.

    I was wondering if you all left it disabled permanently, or if it was just a temporary measure until your transfers were done. I read up on TCP offloading, and it seemed like it would be good to have it enabled, since it supposedly saves up to 75% of the CPU processing.


    Wednesday, February 22, 2012 4:53 PM
  • Did you change the NICs out on the workstations, the servers, or both?

    Currently, a 300 MB file transfers from a Windows 2011 server to all of our Windows XP machines in 180 seconds, BUT the same file transfers to our only Windows 7 workstation in 30 seconds. The RAM, processor, etc. are all the same and up to today's specs.

    Friday, March 2, 2012 12:51 PM
  • None of this helped me with a similar issue; I had to revert to driver version 15.2.0.5 on my server with Broadcom 5719 and 5720 NICs. Using the latest drivers from Broadcom or Dell (I think it was 15.4.0.17) caused really slow network performance on the Hyper-V guests (the host was fine).
    Thursday, November 8, 2012 8:42 AM

  • Trana, did the driver downgrade you applied correct the bursting error, or some other kind of slow performance? I have 2 IBM servers with Broadcom 5709C NICs; if I transfer files larger than 25 GB, at around 21 GB the throughput drops from 100 MB/s to 1 MB/s. I tried all the solutions posted here with no luck. One server runs Windows 2008 R2 Enterprise x64, the other Windows 2012 Standard, both with the latest updates, firmware, drivers, etc.
    Tuesday, November 13, 2012 5:36 AM
  • Sorry for the slow response.

    I had 1 Dell server acting as a Hyper-V host. Using the latest 15.4.0.17 drivers, the physical host itself seemed fine and performed normally; however, the Hyper-V guest servers had extremely bad network performance, with ping times of up to 200 ms from the host to the guests! File copy rates were 200 KB/s to 2 MB/s on the LAN, where I would normally expect 70-80 MB/s.

    After downgrading the NIC drivers, I got the expected network performance on my Hyper-V guests, with 1 ms pings and normal copy transfer rates.

    Host and guests are 2008 R2 SP1 Enterprise.

    But like I said, I never noticed any problems from the host itself to other servers, only with the guests on that server. Still, I'd give it a try if I were you.


    EDIT: I should mention that I have other servers with Broadcom 5709C NICs running the latest drivers just fine. I only had to use the older drivers on this server with the 5719 and 5720.
    • Edited by Trana010 Friday, November 16, 2012 12:17 AM
    Friday, November 16, 2012 12:15 AM
  •  

    Thanks to Trana010, I solved the problem in the end.

    I had a Dell R720 server running Windows Server 2012 Standard as a Hyper-V host. This server has 4 Broadcom 5720 QP 1Gb NICs, and I connected two of them to the switch. The Hyper-V host pinged the gateway in less than 1 ms, but the guests always took more than 20 ms; my email server even saw 80 ms, sometimes 200 ms.

    I tried disabling the offload options on the physical and the virtual NICs, and it still did not work.

    My NIC driver was version 15.4.0.17. I uninstalled the NIC, Windows automatically installed a different driver version, 14.8.1.13, and now it works well.

    Forgive my poor English! :)


    liyangbo2001

    Wednesday, March 27, 2013 6:46 AM
  • Broadcom now has driver 15.6.0.10 and it seems to have fixed this issue.
    Sunday, June 23, 2013 9:50 PM
  • Broadcom now has driver 15.6.0.10 and it seems to have fixed this issue.

    I will be attempting the install today to see whether my VM-to-VM transfer speeds improve as well. One server shows a 100 Mbps connection speed and the other shows 1 Gbps, for some reason!
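    (A quick way to confirm what each box actually negotiated, from a command prompt; WMI reports Speed in bits per second:)

        wmic nic where "NetEnabled=TRUE" get Name,Speed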

    Sup Doc !

    Friday, October 4, 2013 6:05 PM
  • I've had a long-term problem with remote desktop connections to all Hyper-V VMs regularly dropping on one of my Windows 8.1 hosts (using a Gigabyte H55M-UD2H motherboard).

    The workaround (I wouldn't call it a solution, because presumably there's a driver bug somewhere) seems to have been to disable the Large Send Offload Version 2 (IPv4) setting on the Hyper-V Virtual Ethernet adapter on the host machine.

    Sunday, November 9, 2014 11:20 AM
  • Slightly different issue, maybe, but an internet search for my problem brought me here, so... if you have teaming and dropped connections (e.g. MSTSC RDP connections dropping after SCVMM teaming): to troubleshoot, try disabling all but one of the physical adapters in the team. If that fixes the problem, then try different teaming combinations, e.g. Switch Independent with Hyper-V Port.


    • Edited by beroccaboy Saturday, November 15, 2014 1:59 PM
    Saturday, November 15, 2014 1:57 PM