Network interfaces for Hyper-V - 2x40G or 4x10G

    Question

  • Dear Experts!

    I am getting ready to build a 10-node Hyper-V cluster on Windows Server 2016. It is going to run on blades, and each blade server has 2x40G interfaces for network traffic and 2x10G interfaces for storage traffic (iSCSI). An alternative option the OEM suggested is to go with 4x10G for network and 2x10G for storage.

    If I choose 2x40G, since I have only two physical interfaces on the server, connected through two different top-of-rack switches, I need teaming to achieve availability.

    On top of the teamed interface, I need to build virtual interfaces for management, Hyper-V data traffic and live migration, each on a specific VLAN.

    Is this approach good? Is there a way to limit bandwidth on the virtual interfaces?

    Or is the traditional segregation of physical network interfaces still the right approach?

    I would appreciate any documentation, guidance or suggestions on this.

    Cheers !

    Shaba


    Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.


    InsideVirtualization.com

    Thursday, January 10, 2019 4:34 AM

All replies

  • I would definitely go with converged networking, i.e. teaming your adapters and creating virtual adapters for your hosts. Are your network cards RDMA-capable? 
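
    A minimal sketch of that layout on Windows Server 2016, using a classic LBFO team (the adapter names "NIC1"/"NIC2" and the VLAN IDs are placeholders, adjust for your environment; if you want RDMA on the host vNICs you would use Switch Embedded Teaming instead, but the overall shape is the same):

        # Team the two physical ports for availability (placeholder adapter names)
        New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
            -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

        # Hyper-V switch on top of the team; Weight mode allows per-vNIC QoS later
        New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
            -MinimumBandwidthMode Weight -AllowManagementOS $false

        # Host vNICs for each traffic class, each tagged with its own VLAN
        Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
        Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "LiveMigration"
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20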

    Microsoft Certified Professional


    Thursday, January 10, 2019 7:01 AM
  • I agree with Matej. Either way you are going to team pairs of NICs for redundancy, so 2x40G will get you 40 Gb of bandwidth while 4x10G will only get you 20 Gb (assuming you create two teamed pairs). Is the 10G option significantly cheaper? Cheap enough to add more nodes? Otherwise, why pick the slower option?
    Thursday, January 10, 2019 3:07 PM
  • Thanks Matej & D.Pope.

    Bandwidth-wise, 2x40G is attractive. But the traditional infrastructure I have built before all used dedicated physical interfaces for management, the Hyper-V data network, live migration, etc. With the 2x40G option, it will all be virtual NICs on top of the teamed network interface.

    Matej, regarding the question on RDMA: yes, the cards support RDMA.
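
    For reference, RDMA capability and state can be confirmed per adapter from an elevated PowerShell prompt:

        # Lists RDMA-capable adapters and whether RDMA is currently enabled on each
        Get-NetAdapterRdma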

    Cheers !

    Shaba


    Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.


    InsideVirtualization.com

    Friday, January 11, 2019 5:59 AM
  • I was asking about RDMA support because it is highly recommended, and it lets you configure Quality of Service (QoS) to limit and control bandwidth on your virtual adapters. 

    Here is a good starting point:

    https://docs.microsoft.com/en-us/windows-server/networking/technologies/conv-nic/cnic-datacenter
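
    As a rough sketch of the QoS side (assuming the converged switch was created with -MinimumBandwidthMode Weight and using the vNIC names from the earlier example):

        # Reserve relative shares of the teamed bandwidth per host vNIC
        Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
        Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10

        # Or cap a vNIC at an absolute rate (value is in bits per second, here ~10 Gbps)
        Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MaximumBandwidth 10000000000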



    Microsoft Certified Professional



    Friday, January 11, 2019 6:30 AM