Performance & best practices: Hyper-V virtual switch

  • Question

  • Hello,

    For some time now, I have been wondering how powerful the "private virtual switch" in Hyper-V actually is.

    I have a Hyper-V server here (Supermicro H11DSU, 2x 16-core AMD EPYC, 256 GB RAM, all VMs on NVMe) with VMs for file/print services, MS SQL Server and an RDS deployment.

    All external users should access only the RDS deployment via RDP from an external LAN (1 GbE).

    So my idea is to connect only the RDS server to the external LAN (Hyper-V external switch) and to connect all other VMs via a Hyper-V private switch (only VM-to-VM traffic is needed internally).
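
    On the host, that topology would be created roughly as follows - just a sketch; the switch, adapter and VM names are made-up examples, not my real ones:

    # Hypothetical names - adjust to the actual environment
    New-VMSwitch -Name "vSwitch-External" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
    New-VMSwitch -Name "vSwitch-Private" -SwitchType Private
    # RDS VM gets the external network, all other VMs only the private switch
    Connect-VMNetworkAdapter -VMName "RDS01" -SwitchName "vSwitch-External"
    Connect-VMNetworkAdapter -VMName "SQL01","FILE01" -SwitchName "vSwitch-Private"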

    The Hyper-V private virtual switch should be a 10 GbE switch - right?

    If so, the LAN speed tests with the private virtual switch are disastrous (tested with iperf3). My private virtual switch only achieves a throughput of 500-600 Mbps from VM to VM - miles away from 10 Gbit/s.

    What could be the cause of this?

    Does anything still need to be optimized on the private virtual switch? Or does the private virtual switch always perform this poorly? Or is it better to use an external physical 10 GbE switch, even if I don't otherwise need one?

    Thanks for your help!


    Thanks and kind regards, Oliver Richter

    Wednesday, October 9, 2019 7:22 AM

Answers

  • Hi

    I'm sorry, I don't have any further suggestions for you. Maybe, as Tim said, you can open a support case with Microsoft to help you work through it.

    Best regards,

    Lily


    Please remember to mark the replies as answers if they help.

    • Marked as answer by Oliver Richter Thursday, October 24, 2019 7:24 AM
    Thursday, October 24, 2019 1:51 AM

All replies

  • "The HYPER-V privat virtual switch should be a 10GBE switch - right?"

    No, the private virtual switch is a memory-to-memory transfer.  The 10 gigabit description is just a description and has no bearing on the actual speed of the network.

    What sort of LAN test are you running?  If you are performing a file copy, that is not a good test of actual performance.  Without knowing how you are performing your LAN test, it is hard to say why you are seeing the performance you are.
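
    As a quick sanity check, you can confirm from an elevated PowerShell prompt on the host that the switch really is of type Private and that both VMs are connected to it (a minimal sketch, nothing environment-specific assumed):

    # List all virtual switches on the host and their type (External / Internal / Private)
    Get-VMSwitch | Select-Object Name, SwitchType
    # Show which switch each VM network adapter is connected to
    Get-VMNetworkAdapter -VMName * | Select-Object VMName, SwitchName, Status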


    tim

    Wednesday, October 9, 2019 1:25 PM
  • Thanks @tim

    I ran an iperf3 test from VM to VM (https://iperf.fr/ - 64-bit version).

    This is our standard synthetic LAN performance test. It measures only LAN performance; no storage-to-LAN chain is involved.
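
    The runs are plain iperf3 defaults, essentially of this form (a sketch; the address is the IP of the receiving VM on the network under test):

    # On the receiving VM: start the iperf3 server
    iperf3 -s
    # On the sending VM: run the default 10-second TCP test against the receiver
    iperf3 -c <IP of the receiving VM>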

    If I run this test on a physical 10 GbE LAN, this is the performance I typically receive:

    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.00  sec  9.91 GBytes  8.51 Gbits/sec                  sender
    [  4]   0.00-10.00  sec  9.91 GBytes  8.51 Gbits/sec                  receiver

    With the "privat virtual switch" from VM to VM I realize this transfer speeds (simliar):

    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.01  sec   454 MBytes   380 Mbits/sec                  sender
    [  4]   0.00-10.01  sec   454 MBytes   380 Mbits/sec                  receiver

    Very bad!



    Thanks and kind regards, Oliver Richter

    Wednesday, October 9, 2019 4:21 PM
  • Hi,

    Thank you for posting in forum!

    >> The Hyper-V private virtual switch should be a 10 GbE switch - right?

    No, the performance of the private virtual switch may be related to the disk performance and memory of the VMs. As Tim said, the 10 gigabit description is just a description and has no bearing on the actual speed of the network.

    The test result with iperf3 does not have much meaning in itself; this data alone cannot explain anything.

    Optimizing the VMs' disks and increasing the VMs' memory may improve the performance of the private virtual switch.

    Hope this can help you.

    Best Regards,

    Lily Yang


    Please remember to mark the replies as answers if they help.

    Thursday, October 10, 2019 9:20 AM
  • Thanks @Lily

    Unfortunately, your answer doesn't help at all. It says nothing about the true potential of the "private virtual switch" feature. It is also completely unclear to me what effect optimizing the VM's disk should have on pure LAN switch performance. Isn't that simply wrong?!

    Also, the effect of increasing the VM's memory is not apparent to me at all. Conversely, that would mean that if the VM uses more memory for its processes, the LAN performance automatically decreases - what a strange feature that would be! Example: SQL Server with 64 GB RAM (55 GB free at startup) -> 10 GbE LAN performance -> the SQL Server service then consumes 50 GB of RAM -> only 10 Mbit LAN performance? There must be a specific statement or sizing guideline from Microsoft!

    However, I take from the answer that only an external physical 10 GbE switch can reliably provide predictable LAN performance.

    Can anyone say more? Thank you very much!




    Thanks and kind regards, Oliver Richter

    Thursday, October 10, 2019 10:03 AM
  • A private switch uses shared memory on the host to pass information back and forth between VMs connected to the private switch.  I do not understand why you are seeing such poor performance on your private network.  Are you creating Generation 1 or Generation 2 VMs?  If Generation 1, are you using legacy adapters when connecting the VM?
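
    You can check both quickly from the host with PowerShell, for example (a minimal sketch):

    # VM generation (1 or 2)
    Get-VM | Select-Object Name, Generation
    # Whether any VM network adapter is a legacy adapter, and which switch it is connected to
    Get-VM | Get-VMNetworkAdapter | Select-Object VMName, SwitchName, IsLegacy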

    Generally a private virtual switch will perform better than any other network because it does not have to pass through multiple physical components to reach from source system to destination system.  As you are using a third party performance tool for testing purposes, you might want to have a chat with them about how their tool works in a virtual environment.  


    tim

    Thursday, October 10, 2019 2:01 PM
  • Thanks @Tim

    The Hyper-V host is Windows Server 2016 (not Datacenter). The VMs are all Gen 2 and also Windows Server 2016 (not Datacenter). The "private virtual switch" is attached through the standard (synthetic) network adapter, not a legacy adapter.
    The VMs (including SQL and RDS) each have 32 GB of RAM and 8 CPU cores allocated.
    During all tests, no other processes were running in the VMs.
    The behavior was already noticeable with ordinary client-server SQL access. That is why we tested with iperf3 and found this poor LAN performance there as well. In other words, it is not the test that is the problem, but a general problem that the test confirms. We have also deleted the "private virtual switch" and recreated it several times - without any effect on this performance issue.

    Technically, I already understand that the "private virtual switch" involves only CPU and RAM operations (it cannot be anything else if the traffic runs purely VM to VM on one physical machine).

    What is your recommendation for testing performance without a third-party tool? I don't know of any MS tool for this (apart from Performance Monitor, which only observes and does not generate load).

    Can anyone say which transfer rates can be achieved in practice with the "private virtual switch"? Statements like "generally a private virtual switch will perform better than any other network" or "the 10 gigabit description is just a description and has no bearing" are too vague for me.


    Thanks and kind regards, Oliver Richter


    Friday, October 11, 2019 7:44 AM
  • What is the network configuration for the two VMs you are using in your test?

    Please provide the text output of ipconfig /all


    tim

    Friday, October 11, 2019 1:27 PM
    ETH1: LAN adapter for the external network

    ETH2: private virtual switch LAN (VM to VM) - this is the LAN we tested.

    VM1:

    Ethernet adapter ETH1:

       Connection-specific DNS Suffix  . : local
       IPv6 Address. . . . . . . . . . . : fd00:1:100:0:e56c:9685:8c8a:a8d6
       Link-local IPv6 Address . . . . . : fe80::d08c:8dd2:86fa:7c36%11
       IPv4 Address. . . . . . . . . . . : 192.168.100.10
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.100.250

    Ethernet adapter ETH2:

       Connection-specific DNS Suffix  . : local
       IPv6 Address. . . . . . . . . . . : fd00:1:100:0:56cf:64cc:d62:92a7
       Link-local IPv6 Address . . . . . : fe80::ac25:a898:ba97:654b%7
       IPv4 Address. . . . . . . . . . . : 192.168.101.10
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 

    VM2:

    Ethernet adapter ETH1:

       Connection-specific DNS Suffix  . : local
       IPv6 Address. . . . . . . . . . . : fd00:1:100:0:f321:7761:ccba:bed6
       Link-local IPv6 Address . . . . . : fe80::51d0:945a:2d1:2e26%19
       IPv4 Address. . . . . . . . . . . : 192.168.100.11
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.100.250

    Ethernet adapter ETH2:

       Connection-specific DNS Suffix  . : local
       IPv6 Address. . . . . . . . . . . : fd00:1:100:0:b8cf:ae6b:c2ae:69d0
       Link-local IPv6 Address . . . . . : fe80::bd30:59c2:77af:4b64%15
       IPv4 Address. . . . . . . . . . . : 192.168.101.11
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 



    • Edited by Oliver Richter Tuesday, October 15, 2019 5:16 PM two VMs config
    Monday, October 14, 2019 3:01 PM
  • Is what you provided for a single VM?  I asked for the configuration for the two VMs that are being tested.

    tim

    Tuesday, October 15, 2019 3:55 PM
  • Hello @tim

    I've updated the last post.


    Thanks and kind regards, Oliver Richter

    Tuesday, October 15, 2019 5:17 PM
  • Were both networks available during your test?  Did you try the test with just the network on the private switch available?

    tim

    Wednesday, October 16, 2019 1:58 PM
    Yes, I did that test as well.
    First I tested the LAN to the external network (ETH1), then the private virtual switch (ETH2), and I also repeated the test with ETH1 disabled and only ETH2 active. VM1 is an MS SQL Server (hosting the databases) and VM2 is an RDS server with the client application that accesses the MS SQL databases.
    With iperf3 we then tested specifically from VM2 via ETH2 to the ETH2 IP address of VM1. We use the same procedure on purely physical machines; on 10 GbE, something close to 8.5-9.9 Gbps can be measured in practice.
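
    For reference, the runs were essentially of this form; binding iperf3 to the ETH2 addresses (the -B option) is one way to make sure no traffic leaks over ETH1 - a sketch using the addresses from the ipconfig output above:

    # On VM1 (receiver): listen on its ETH2 address only
    iperf3 -s -B 192.168.101.10
    # On VM2 (sender): bind to its own ETH2 address and target VM1's ETH2 address
    iperf3 -c 192.168.101.10 -B 192.168.101.11 -t 10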

    Thanks and kind regards, Oliver Richter

    Thursday, October 17, 2019 9:24 AM
  • I'm at a loss as to why you are seeing that difference in performance.  Maybe someone else will jump in with some ideas.  Or you can open a support case with Microsoft to help you work through it.


    tim

    Thursday, October 17, 2019 2:00 PM
  • Hi Tim,

    first of all, thank you.

    For me it would at least be interesting to know what performance the "private virtual switch" can deliver in practice.

    If you say that in your environments you always reach a 10 Gbps transfer rate, then I have something to fix. If you say that you only reach 1-2 Gbps, then the "private virtual switch" would not be a solution anyway, and I would rather use an external 10-25 GbE switch. Can no one give me practically achievable transfer rates?


    Thanks and kind regards, Oliver Richter

    Friday, October 18, 2019 6:57 AM
  • Hi

    I'm sorry, I don't have any further suggestions for you. Maybe, as Tim said, you can open a support case with Microsoft to help you work through it.

    Best regards,

    Lily


    Please remember to mark the replies as answers if they help.

    • Marked as answer by Oliver Richter Thursday, October 24, 2019 7:24 AM
    Thursday, October 24, 2019 1:51 AM