Hyper-V External Virtual Network vs. Jumbo MTU (9000+)...

    Question

  • The short version:  How can you enable Jumbo MTU on specific instances of the vmswitch.sys driver used by the Hyper-V Virtual Network?

    ...

    The long version:  Here is the scenario - testing iSCSI volumes on a 2008 Hyper-V host with a 2003 R2 SP2 x64 guest...  It works great on the host, until I want to mount iSCSI volumes from the guest.  For the purposes of this example, here is a simplified network layout:

    NICs A, B and C.  NIC A (actually a load-balanced team, but that's not important; again, this is simplified) is used for the LAN network.  NICs B and C are used for the iSCSI network.

    So with no virtual network set up, the host would have:

    NICA - 192.168.0.1/24 - MTU 1500
    NICB - 10.0.0.1/8 - MTU 9000
    NICC - 10.0.0.2/8 - MTU 9000

    Using 10.0.0.1 and 10.0.0.2 to initiate the iSCSI connection with MPIO support.
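
    (As a side note, a quick way to confirm jumbo frames actually pass end to end on the iSCSI segment is a don't-fragment ping sized just under the MTU - 8972 bytes of payload plus 28 bytes of IP and ICMP headers comes to 9000; the target address below is just a placeholder for the SAN portal IP:)

    ping -f -l 8972 <iSCSI target portal IP>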

    ...

    Now we try to add guest support - because the iSCSI disk has snapshot capability that works with VSS to give more flexibility for recovery - provided the data on the disk is the actual host application data (SQL, Exchange, etc.), not a VHD file.

    Add an external virtual network to NIC A...  It keeps the IP address and the MTU of 1500, as it should...  But when you add a virtual network to NIC B and NIC C, the physical adapter - whose driver has the MTU set to 9000 - is bound only to the virtual network switch protocol, and the new adapters, VNA, VNB and VNC, use the vmswitch.sys driver, which doesn't have the ability to set the MTU...  just some checksum offload properties.  So we have:

    VNA - 192.168.0.1/24 - MTU 1500
    VNB - 10.0.0.1/8 - MTU 1500
    VNC - 10.0.0.2/8 - MTU 1500

    Which means both the 2008 Hyper-V host and the guests must connect to iSCSI volumes with the standard packet size, which is a major performance hit.

    All I can find on the topic are the classes documented here:

    http://msdn2.microsoft.com/en-us/library/cc136912(VS.85).aspx
    http://msdn2.microsoft.com/en-us/library/cc136835(VS.85).aspx
    http://msdn2.microsoft.com/en-us/library/cc136838(VS.85).aspx

    These say something like this for the synthetic and emulated Ethernet port classes:

    The active or negotiated maximum transmission unit (MTU) that can be supported. This property is inherited from CIM_NetworkPort and is always set to 1500.

    Which has me worried...  Anyone have insight into this?
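
    (For reference, the property those pages describe - ActiveMaximumTransmissionUnit - can be read on the host with a WMI query; just a sketch, assuming the v1 root\virtualization namespace and the synthetic Ethernet port class from the links above:)

    wmic /namespace:\\root\virtualization path Msvm_SyntheticEthernetPort get ElementName,ActiveMaximumTransmissionUnit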
    Thursday, April 17, 2008 2:12 PM

Answers

  • OK, per this post:

    http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=2988499&SiteID=17

    Disabling Large Send Offload for IPv4 on both the physical and the virtual network switch drivers seems to have resolved the traffic-not-passing-at-all part...  so the iSCSI traffic is working in a virtual machine; yay.

    ...

    So back to the original question - how do you turn that MTU up to 9000 on the vmswitch.sys Virtual Network drivers?

    Anyone?

    Thanks!
    Friday, April 18, 2008 8:37 PM
  • For Vista SP1 (guests) and Windows 2008 (including the Hyper-V host computer) this seems to work:

    netsh interface ipv4 set subinterface "Local Area Connection 5" mtu=9000 store=persistent
    netsh interface ipv4 set subinterface "Local Area Connection 6" mtu=9000 store=persistent

    Where "Local Area Connection 5" and "Local Area Connection 6" are the adapters created by the virtual network manager.

    ...

    It seems like Routing and Remote Access will need to be installed on my W2K3 R2 x64 SP2 guest machine in order to do the same thing there...  It would be a lot nicer if it were part of the VMBus network driver options.
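
    (For what it's worth, on Windows Server 2003 the per-interface MTU can also be set directly in the registry instead of going through RRAS - a sketch, where {interface-GUID} is a placeholder for the GUID of the iSCSI NIC under the Tcpip interfaces key, and a reboot is typically needed before it takes effect:)

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface-GUID}" /v MTU /t REG_DWORD /d 9000 /f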

    Anyone have another idea before I go try that as well?
    Wednesday, April 23, 2008 5:24 PM

All replies

  • More testing... 

    I haven't attempted to "go all the way" with this, since my initial tests aren't looking good.  So with _one_ of the network cards switched over to the virtual network on the host 2008 machine, I can initiate an iSCSI logon, which is confirmed on the client and the disk and uses MTU 1500...  However, it doesn't seem that the iSCSI initiator is compatible with this configuration anyway...  While it shows that a connection to the target is successful, when I try to actually move data I get a list of errors:

    Source: iScsiPrt

    First we see a few of these - Event ID: 9 - Target did not respond in time for a SCSI request.
    Then one of these -  Event ID: 63 - Can not Reset the Target or LUN.
    Followed by a bunch of these - Event ID: 7 - The initiator could not send an iSCSI PDU.
    A few others come and go until you get to this - Event ID: 34 - A connection to the target was lost, but Initiator successfully reconnected to the target.

    and repeat...

    I have kept NIC C off the virtual network so that once I cancel the login from the card I'm testing, the iSCSI traffic has a path that works and it's able to write the data without causing file system problems...  But with the total failure of this test, I'm less eager to now try it on both adapters - maybe even throw a reboot in there to see if the iSCSI layer needs that to function - and see what happens.

    So, before I really screw things up, I'll ask the dumb question - is this configuration that I'm trying to make work supposed to work?

    Thanks.
    Thursday, April 17, 2008 4:15 PM
  • Alright, I'll stop answering my own question here soon...

    If it's not possible to set MTU = 9000 with the vmswitch.sys driver that is in RC0 (I was hoping for a registry hack), can someone from MS at least confirm that this feature is planned for a future RC or the RTM version?

    Thanks again...
    Monday, April 21, 2008 1:55 PM
  • Ethan,

    I have been following your thread and wondering the same thing about those two issues:

    TCP Offload

    Jumbo Frames

    I have multiple clients on a host that will want to use dedicated physical NICs (albeit routed through the virtual switch) to reach the iSCSI SAN.  My goal/expectation is that I install a driver that supports both Jumbo Frames and offload on the physical NIC (see below) and just let the vNIC handle large frames.

    I recognize it's probably not the place for the vNIC to be doing offload - I get that.  What I'm concerned about is the vNIC getting an MTU of 9000 for the same purpose (iSCSI).  Does the physical host NIC that you disabled the offload on actually support it?  I'm going to give it a whirl when Alacritech drops their 9.3 release for Win2008 (which should be coming shortly) and which supports large frames.

    Anyone from MS have any comment on what we're trying to do here?

    Thanks,

    Mark
    Wednesday, April 23, 2008 9:54 PM
  • Yes...  I've tested this with an Intel PRO/1000 PT Dual Port PCIe adapter that IBM sells - using their published 2008-certified driver version 9.12.17.0 (dated 2/6/2008) - and I also tested it with the onboard Broadcom BCM5708C NetXtreme II adapters and their published 2008-certified driver version 4.1.3.0 (dated 1/10/2008)...

    In the physical driver, both support Jumbo MTU (9014 for Intel and 9000 for Broadcom) and Large Send Offload (separate IPv4 and IPv6 options for Intel, a single LSO option for Broadcom).

    I currently have LSO disabled on both physical adapter drivers and on the MS vNIC drivers.  It doesn't matter what the MTU is set to for the physical driver; when the MS vNIC drivers are installed, the virtual switch operates at 1500 MTU...  So I used the 'netsh' command to enable MTU=9014 and MTU=9000 on each respective "External" vNIC driver on the 2008 Hyper-V host that is used for the iSCSI network.  In my limited and very unscientific testing, this does appear to give the roughly 10% reduction in CPU overhead that I'd expect with these regular non-HBA server network cards.

    The guest OSes that have the VMBus network driver installed are on their own to set the MTU, however.  So even with it working for iSCSI connections on the 2008 Hyper-V host computer, a guest that needs an iSCSI connection via those external virtual network connections will still initiate at MTU=1500 without additional tweaking in the guest OS.  Which is why I keep thinking it would be nice if the vmswitch.sys driver were a little more MTU-aware and passed the setting from the physical to the virtual on the Hyper-V host, and from the virtual on the host to the synthetic on the guest.
    Thursday, April 24, 2008 3:10 PM
  • When you think about it, with an MTU of 1500 the stack has to process roughly six times as many packets for the same incoming block data as with an MTU of 9000, just to strip the TCP/IP information off the data.  I would like someone from MS to state whether or not offload on the physical NIC is really going to do anything, or whether the virtual switch is actually just taking the raw data off the wire and handing it to the vNIC, where the guest has to process it *again*.  If that's the case, having TOE NICs in Hyper-V is a complete waste of money, because the guest will never see the light of day with regards to the hardware, and the physical driver isn't doing anything except acting like a switch.
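
    (As a rough sanity check on that figure: the Ethernet payload goes from 1500 to 9000 bytes, so moving a 1 MB block takes roughly 1,048,576 / 1,460 ≈ 719 TCP segments at MTU 1500 versus roughly 1,048,576 / 8,960 ≈ 118 at MTU 9000, once 40 bytes of IP and TCP headers are taken out of each packet - about six times the per-packet work at the smaller MTU.)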

     

    Have you run any perf counters on CPU usage at MTU 1500 vs. 9000 to see the impact?  I would expect it to decrease for both host and guest at 9000 by orders of magnitude.  Try transferring a GB-size file to your iSCSI target and take some readings...

     

    Ben, anyone?

     

    Mark

    Thursday, April 24, 2008 4:04 PM
  •  SEI Support wrote:
     

    Have you run any perf counters on CPU usage at MTU 1500 vs. 9000 to see the impact?  I would expect it to decrease for both host and guest at 9000 by orders of magnitude.  Try transferring a GB-size file to your iSCSI target and take some readings...


    Like I said, I've only looked at unscientific methods so far...  I did some copying of large (5 GB) files before and after the MTU change using the netsh command, but I didn't use perf counters; I just eyeballed it...  It seemed like average CPU use went down about 10%, and sustained transfer rates stayed around 102-108 MB/s, which is all I would expect from this iSCSI unit talking to a host with two GbE network adapters.  For comparison, a 32-bit W2K3 SP2 machine with two GbE adapters sees 108-124 MB/s sustained transfer rates to the same unit.
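
    (For anyone who wants something more rigorous than eyeballing Task Manager, the built-in typeperf tool can log the relevant counters during a transfer - a sketch using the standard processor and network interface counters, sampling once a second for a minute:)

    typeperf "\Processor(_Total)\% Processor Time" "\Network Interface(*)\Bytes Total/sec" -si 1 -sc 60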

    Thursday, April 24, 2008 7:58 PM
  • OK, I learned some interesting things talking with my friend at Alacritech.  Beyond their Win2008 driver coming along nicely (a new one with Jumbo frame support), I learned that MS is NOT intending to do any offload in the first release.  For vNext of Hyper-V, they've been throwing around the "idea" of building an API for the synthetic driver to call so offload can be handled from the guest to the root partition!  We need to make sure that MS isn't leaving us TOE folks out to dry - 10 Gig is coming, and I can only imagine the poop it's going to dump on my CPU for handling frames...

     

    Thanks for the update!

    Mark

     

    Sunday, April 27, 2008 4:13 AM