Internal Virtual Switch performance issue

    Question

  • I have created an internal vSwitch. The core server is Windows Server 2016, but I see this on 2012 R2 also. There are two Hyper-V instances of Server 2012 R2 that use the internal switch. The two VMs can communicate with each other; there is no physical adapter involved.

    The problem is that the speed seems to be capped at 1 Gbps. I would assume an internal adapter not bound to a physical adapter would be capable of achieving speeds well in excess of 1 Gbps.

    Is there a setting that will remove the 1 Gbps cap on the internal vSwitch?


    Saturday, February 11, 2017 9:12 PM

All replies

  • http://www.altaro.com/hyper-v/why-do-hyper-v-virtual-adapters-show-10gbps/
    Saturday, February 11, 2017 9:59 PM
  • That doesn't seem to apply. That article specifically references a vSwitch with an attached physical adapter.

    There is no physical adapter involved.

    Saturday, February 11, 2017 10:12 PM
  • Hi,

    >>The problem is that the speed seems to be capped at 1 Gbps.

    The speed shown in the VM is not related to the actual transfer speed. It is a nominal figure; real throughput depends on the performance of the underlying hardware.

    >>Is there a setting that will remove the 1 Gbps cap on the internal vSwitch?

    I'm afraid not.

    Here is the reference of hyper-v network:

    https://technet.microsoft.com/en-us/library/jj945275.aspx

    Best Regards,

    Leo


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.


    Monday, February 13, 2017 1:52 AM
    Moderator
  • I find it difficult to believe that a hardware-less, software-only internal vSwitch has a hardware limit of 1 Gbps.

    What is limiting the speed since there is no physical adapter installed?

    Monday, February 13, 2017 7:39 AM
  • How are you measuring performance? You are correct: there is no hardware limit of 1 Gbps. When using an internal virtual switch, transfers are accomplished through the host's memory. But not all network performance measuring methods are created equal, and you have not provided any information on how you are obtaining your 1 Gbps figure.

    . : | : . : | : . tim

    Monday, February 13, 2017 1:24 PM
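    For reference, a minimal iperf3 pair for this kind of VM-to-VM test looks like the following (the server address is a placeholder for whatever the receiving VM uses; the flags shown are standard iperf3 options):

    ```shell
    # On the receiving VM: start an iperf3 server, reporting every second
    iperf3 -s -i 1

    # On the sending VM: run a 10-second TCP test against the server
    iperf3 -c 10.200.200.92 -t 10 -i 1
    ```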
  • Wow, this is crazy.

    I use iperf. I get 10.2 Gbps now. It was limited to 1 Gbps last week.

    PS C:\iperf> .\iperf3.exe -s -i 1
    -----------------------------------------------------------
    Server listening on 5201
    -----------------------------------------------------------
    Accepted connection from 10.200.200.93, port 50179
    [  5] local 10.200.200.92 port 5201 connected to 10.200.200.93 port 50180
    [ ID] Interval           Transfer     Bandwidth
    [  5]   0.00-1.00   sec  1.08 GBytes  9.26 Gbits/sec
    [  5]   1.00-2.00   sec  1.18 GBytes  10.2 Gbits/sec
    [  5]   2.00-3.00   sec  1.19 GBytes  10.3 Gbits/sec
    [  5]   3.00-4.00   sec  1.22 GBytes  10.4 Gbits/sec
    [  5]   4.00-5.00   sec  1.19 GBytes  10.2 Gbits/sec
    [  5]   5.00-6.00   sec  1.19 GBytes  10.2 Gbits/sec
    [  5]   6.00-7.00   sec  1.19 GBytes  10.2 Gbits/sec
    [  5]   7.00-8.00   sec  1.15 GBytes  9.86 Gbits/sec
    [  5]   8.00-9.00   sec  1.18 GBytes  10.1 Gbits/sec
    [  5]   9.00-10.00  sec  1.20 GBytes  10.3 Gbits/sec
    [  5]  10.00-10.00  sec  1.25 MBytes  8.70 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth
    [  5]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec                  sender
    [  5]   0.00-10.00  sec  11.8 GBytes  10.1 Gbits/sec                  receiver



    I also tested transferring files directly via the IP using Explorer. Last week it was transferring at about 111 MB/sec; today it ranges from roughly 600 MB/sec to 1.4 GB/sec. It's hard to tell exactly, since there is a lot of memory caching going on and possibly some RAID delays skewing the file transfer performance.

    I have changed nothing except a few reboots.

    This is still a bit odd. I have read that, regardless of the reported interface speed, software-only internal vSwitch interfaces have no limit and should theoretically transfer as fast as the processor can pass packets, but that is clearly not happening, as the interface now appears capped at 10 Gbps.

    Monday, February 13, 2017 8:07 PM
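    As a side note on units: the figures in this thread are consistent with bit rates, not byte rates, and the 111 MB/sec Explorer transfer from "last week" is almost exactly a saturated gigabit link after protocol overhead. A quick conversion sketch (the helper function is illustrative, not from the thread):

    ```python
    def gbps_to_mbytes_per_sec(gbps: float) -> float:
        """Convert gigabits/sec to megabytes/sec (decimal units, no overhead)."""
        return gbps * 1000 / 8

    # A 1 Gbps link carries at most 125 MB/s raw; real file copies land
    # somewhat lower once TCP/SMB overhead is subtracted.
    print(gbps_to_mbytes_per_sec(1.0))   # 125.0

    # The 10.1 Gbps iperf3 result works out to roughly 1.26 GB/s.
    print(gbps_to_mbytes_per_sec(10.1))  # 1262.5
    ```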
  • Try a few more reboots. Perhaps you will get 100G!

    Why do you assume that whatever figure you see is the result of some (non-existent) built-in limit rather than the varying load on your system?


    Bill

    Monday, February 13, 2017 10:08 PM
  • Hi,

    Are there any updates?

    You could mark the reply as answer if it is helpful. 

    Best Regards,

    Leo


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Thursday, March 9, 2017 12:28 PM
    Moderator