Performance & best practices: Hyper-V virtual switch

  • Question

  • Hello,

    For some time now, I have been wondering how powerful the "private virtual switch" in Hyper-V actually is.

    I have a Hyper-V server here (SM Server H11DSU, 2x 16-core AMD EPYC, 256 GB RAM, all VMs on NVMe) running VMs for file/print services, MS SQL Server, and an RDS deployment.

    All external users are supposed to access only the RDS deployment via RDP from an external LAN (1 GbE).

    So my idea is to connect only the RDS server to the external LAN (Hyper-V external switch) and to connect all other VMs via a Hyper-V private switch (needed only internally between the VMs).

    The Hyper-V private virtual switch should be a 10 GbE switch - right?

    If so, the LAN speed tests with the private virtual switch are disastrous (tested with iperf3). My private virtual switch only achieves a throughput of 500-600 Mbit/s from VM to VM - miles away from 10 Gbit/s.
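    For context, a VM-to-VM iperf3 test of this kind is typically run as follows. The address below is a placeholder for the receiving VM's private-switch IP; `-P` adds parallel TCP streams, which often matters on a virtual switch where a single stream can be limited by one CPU core:

```shell
# On the receiving VM: start the iperf3 server
iperf3 -s

# On the sending VM: single-stream, 10-second test
iperf3 -c 192.168.101.10 -t 10

# Same test with 4 parallel streams; if this is much faster than the
# single-stream run, the bottleneck is per-stream CPU, not the switch
iperf3 -c 192.168.101.10 -t 10 -P 4
```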

    What can be the cause of this?

    Does anything still need to be optimized on the private virtual switch? Or is the private virtual switch always this bad performance-wise? Or is it better to use an external physical 10 GbE switch, even though I don't actually need one?

    Thanks for your help!


    Thanks and kind regards, Oliver Richter

    Wednesday, October 9, 2019 7:22 AM

All replies

  • "The Hyper-V private virtual switch should be a 10 GbE switch - right?"

    No, the private virtual switch is a memory-to-memory transfer.  The 10 gigabit description is just a description and has no bearing on the actual speed of the network.

    What sort of LAN test are you running?  If you are performing a file copy, that is not a good test of actual performance.  Without knowing how you are performing your LAN test, it is hard to say why you are seeing the numbers you are.


    tim

    Wednesday, October 9, 2019 1:25 PM
  • Thanks @tim

    I ran an iperf3 test from VM to VM (https://iperf.fr/ - 64-bit version).

    This is our standard synthetic LAN performance test. It measures only LAN throughput - no storage-to-LAN or LAN-to-storage chain is involved.

    If I run this test on a physical 10 GbE LAN, this is the performance I typically receive:

    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.00  sec  9.91 GBytes  8.51 Gbits/sec                  sender
    [  4]   0.00-10.00  sec  9.91 GBytes  8.51 Gbits/sec                  receiver

    With the "private virtual switch", from VM to VM I get transfer speeds like this (results are similar across runs):

    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.01  sec   454 MBytes   380 Mbits/sec                  sender
    [  4]   0.00-10.01  sec   454 MBytes   380 Mbits/sec                  receiver

    Very bad!



    Thanks and kind regards, Oliver Richter

    Wednesday, October 9, 2019 4:21 PM
  • Hi,

    Thank you for posting in the forum!

    >> The Hyper-V private virtual switch should be a 10 GbE switch - right?

    No. The performance of the private virtual switch may be related to the disk performance and memory of the VMs. As Tim said, the 10 gigabit description is just a description and has no bearing on the actual speed of the network.

    The test result with iperf3 does not have much meaning on its own; this data alone cannot explain anything.

    Optimizing the VMs' disks and increasing the VMs' memory may improve the performance of the private virtual switch.

    Hope this can help you.

    Best Regards,

    Lily Yang


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact

    Thursday, October 10, 2019 9:20 AM
  • Thanks @Lily

    Unfortunately, your answer doesn't help at all. It says nothing about the true potential of the "private virtual switch" feature. It is also completely unclear to me what effect optimizing a VM's disk should have on pure LAN switch performance. Isn't that simply wrong?!

    Also, the effect of increasing a VM's memory is not apparent to me at all. Taken the other way around, it would mean that when a VM's processes use more memory, LAN performance automatically drops - what a stupid feature that would be! Example: SQL Server with 64 GB RAM (55 GB free at startup) -> 10 GbE LAN performance -> the SQL Server service then consumes 50 GB RAM -> only 10 Mbit/s LAN performance? There must be some specific statement or sizing guideline from Microsoft!

    However, what I take from the answer is that only an external physical 10 GbE switch can reliably deliver predictable LAN performance.

    Can anyone say more? Thank you very much!




    Thanks and kind regards, Oliver Richter

    Thursday, October 10, 2019 10:03 AM
  • A private switch uses shared memory on the host to pass information back and forth between VMs connected to the private switch.  I do not understand why you are seeing such poor performance on your private network.  Are you creating Generation 1 or Generation 2 VMs?  If Generation 1, are you using legacy adapters when connecting the VM?

    Generally a private virtual switch will perform better than any other network because traffic does not have to pass through multiple physical components on its way from the source system to the destination system.  As you are using a third-party performance tool for testing, you might want to have a chat with the vendor about how the tool behaves in a virtual environment.
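    This memory-to-memory point can be illustrated without Hyper-V at all: a TCP transfer over the loopback interface is likewise a pure memory copy, and its throughput is bounded by CPU and copy bandwidth rather than by any nominal link speed. A minimal Python sketch of such a measurement (illustrative only, not a Hyper-V tool):

```python
import socket
import threading
import time

CHUNK = 1 << 16          # 64 KiB per send
TOTAL = 1 << 28          # transfer 256 MiB in total

def receiver(srv: socket.socket, done: threading.Event) -> None:
    """Accept one connection and discard everything that arrives."""
    conn, _ = srv.accept()
    with conn:
        while conn.recv(CHUNK):
            pass
    done.set()

# Listen on an ephemeral loopback port
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

done = threading.Event()
threading.Thread(target=receiver, args=(srv, done), daemon=True).start()

# Send TOTAL bytes and time the transfer end to end
cli = socket.create_connection(("127.0.0.1", port))
payload = b"\x00" * CHUNK
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    cli.sendall(payload)
    sent += len(payload)
cli.close()
done.wait()
elapsed = time.perf_counter() - start
srv.close()

gbits = sent * 8 / elapsed / 1e9
print(f"{gbits:.2f} Gbit/s over loopback (memory-to-memory)")
```

    The reported figure varies with CPU speed and buffer sizes, which is exactly why a loopback or virtual "link" has no fixed nominal rate.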


    tim

    Thursday, October 10, 2019 2:01 PM
  • Thanks @Tim

    The Hyper-V host is Windows Server 2016 (not Datacenter). The VMs are all Gen 2 and also run Windows Server 2016 (not Datacenter). The private virtual switch is connected through the current synthetic network adapter (not the legacy adapter).
    The VMs (including SQL and RDS) each have 32 GB of RAM and 8 CPU cores allocated.
    For all tests, no other processes are running in the VMs.
    The behavior was noticeable even with ordinary client-server SQL access. We therefore tested with iperf3 and confirmed the poor LAN performance. I mean, it is not the test that is the problem; the test merely confirms a general problem. We have also deleted and re-created the private virtual switch several times - with no effect on this performance issue.
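    The configuration described above (switch type Private, synthetic adapter) can be double-checked on the host with the built-in Hyper-V PowerShell module; a quick sanity check along these lines (`Get-VMSwitch` and `Get-VMNetworkAdapter` are standard cmdlets of that module):

```powershell
# Run in an elevated PowerShell session on the Hyper-V host.

# Confirm the switch type really is "Private"
Get-VMSwitch | Format-Table Name, SwitchType

# Confirm each VM is connected through the synthetic adapter to that switch
Get-VMNetworkAdapter -VMName * |
    Format-Table VMName, SwitchName, Status, IPAddresses
```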

    Technically, I already understand that the "private virtual switch" consists purely of CPU and RAM operations (it cannot be anything else when traffic runs purely VM to VM on a single physical machine).

    What is your recommendation for testing performance without a third-party tool? I don't know of any Microsoft tool for this (apart from Performance Monitor, which only observes and does not generate load).
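    On the question of a Microsoft tool: Microsoft does publish NTttcp, a free network throughput benchmark for Windows (available from Microsoft on GitHub). A sketch of a typical run, assuming the usual `-s`/`-r`/`-m` flag syntax (check `ntttcp.exe -?` for your version):

```shell
# On the receiving VM (address matching the private switch in this thread)
ntttcp.exe -r -m 8,*,192.168.101.10

# On the sending VM: 8 threads, 15-second run, same receiver address
ntttcp.exe -s -m 8,*,192.168.101.10 -t 15
```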

    Can anyone say which transfer rates can be achieved in practice with the "private virtual switch"? Statements like "generally a private virtual switch will perform better than any other network" or "the 10 gigabit description is just a description and has no bearing" are simply too vague.


    Thanks and kind regards, Oliver Richter


    Friday, October 11, 2019 7:44 AM
  • What is the network configuration for the two VMs you are using in your test?

    Please provide the text output of ipconfig /all


    tim

    Friday, October 11, 2019 1:27 PM
    (Output was in German; labels are translated to English below.)

    ETH1: LAN for external access

    ETH2: private virtual switch LAN (VM to VM) - this is the LAN we tested.

    Ethernet adapter ETH1:

       Connection-specific DNS Suffix  . : local
       IPv6 Address. . . . . . . . . . . : fd00:1:100:0:e56c:9685:8c8a:a8d6
       Link-local IPv6 Address . . . . . : fe80::d08c:8dd2:86fa:7c36%11
       IPv4 Address. . . . . . . . . . . : 192.168.100.10
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.100.250

    Ethernet adapter ETH2:

       Connection-specific DNS Suffix  . : local
       IPv6 Address. . . . . . . . . . . : fd00:1:100:0:56cf:64cc:d62:92a7
       Link-local IPv6 Address . . . . . : fe80::ac25:a898:ba97:654b%7
       IPv4 Address. . . . . . . . . . . : 192.168.101.10
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :


    Thanks and kind regards, Oliver Richter

    16 hours 4 minutes ago