Server 2016 vs Server 2016 v1709 - NIC Teaming VM Switch Issue - Bug?

  • Question

  • Hi there everybody, and thanks in advance for any advice, but I've searched high and low to no avail.

    I have 2 identical HP DL380 G7 servers: same-spec CPU/memory/disks/networking etc., all patched with the latest BIOS/iLO/backplane/drive firmware.

    Yes, I know they are "old" but please put that to one side for now.

    On the 1st server I installed Windows Server 2016 with GUI (v10.0.14393) with all Windows updates, then installed the Hyper-V role and rebooted.

    1st Server running Windows Server 2016 full GUI (not v1709)

    On the 2nd server I installed Windows Server 2016 v1709 (no GUI - Core only) with all Windows updates, then installed the Hyper-V role and rebooted.

    Windows 2016 Server v1709 (Server Core Only)

    I should point out that both of these servers have been running Windows Server 2012 R2 for 3 years or so with no issues whatsoever, in the same configuration I'm about to explain below, as we want to upgrade them to 2016 (or 2016 v1709 ideally).

    Each server has a built-in quad-port NIC card, and we use 2 of these NICs in a "Team" utilising the built-in NIC teaming functionality of Windows Server 2016 and 2016 v1709 on each server respectively (Switch Independent / Dynamic / No Standby Adapter).
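
    For anyone following along in PowerShell, the team itself is a one-liner; the team and member adapter names below are just placeholders for our own:

        # Create a switch-independent team with dynamic load balancing and no standby adapter
        New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic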

    Here is the team config on the 2016 Server with full GUI (not v1709):

    And below is the same configuration on the 2016 Server v1709 (Core), using remote management in Server Manager to configure NIC Teaming (I'm quite capable of PowerShell, but for ease of understanding I've pulled this from the GUI on a remote server).

    Now, if you're still with me, I'll continue.

    We have multiple VLANs in our org, and our VMs across both servers need access to these VLANs via the physical NIC team we just created above. So in Hyper-V Manager we create an External Switch and "bind" it to the physical "teamed" NIC that is created based on the above config. Once the team is established, the first team interface generally shows up as "Microsoft Network Adapter Multiplexor Driver" in Network & Adapter Settings, and in Hyper-V Manager you choose that network adapter for your VSwitch to be bound to.
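
    The PowerShell equivalent, for reference (again, the switch and team names are illustrative):

        # Bind an external VSwitch to the team interface
        New-VMSwitch -Name "External-Team1" -NetAdapterName "Team1" -AllowManagementOS $true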

    So we do this on both servers. So far so good.

    Now, once we have created our additional VLAN team interfaces in NIC Teaming in Server Manager or via PowerShell, you end up with multiple "Microsoft Network Adapter Multiplexor Driver 2/3/4/5" adapters and so on, depending on how many teamed interfaces you created. We then create a VSwitch bound to each one, which can then be bound to the NICs in our VMs. Remember, this all works fine in 2012 R2. Here is an example of a second teamed interface on the same physical team but on a different VLAN (51 in this case):
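
    In PowerShell terms, that second team interface looks something like this (names are again placeholders):

        # Add a second team interface to the existing team, tagged for VLAN 51
        Add-NetLbfoTeamNic -Team "Team1" -VlanID 51 -Name "Team1-VLAN51"
        # Each team interface shows up as another "Microsoft Network Adapter Multiplexor Driver" adapter
        Get-NetAdapter | Where-Object InterfaceDescription -like "*Multiplexor*"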


    So here is the crux. In 2016 Server (not v1709), after we create the first VSwitch bound to the first teamed interface "Microsoft Network Adapter Multiplexor Driver", we can go back into Hyper-V Virtual Switch Manager, create our second VSwitch and bind it to "Microsoft Network Adapter Multiplexor Driver 2" (which in NIC Teaming is configured to carry only traffic tagged for VLAN 51 in this example). This is created and works fine.

    BUT in Windows Server 2016 v1709, if you try to add your SECOND VSwitch in Hyper-V Virtual Switch Manager bound to "Microsoft Network Adapter Multiplexor Driver 2/3/4" etc., it FAILS with "Failed while adding Virtual Ethernet Switch connections".

    I should point out that if we do this via PowerShell directly on the console of the 2016 v1709 server, we get the same error (as we normally would, given that Server 2016 v1709 is Server Core only).
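
    For anyone trying to recreate it, this is the shape of the command that succeeds on 2016 (14393) but fails on v1709 (adapter names as in the placeholder examples above):

        # Works on Server 2016 RTM; on v1709 this fails with
        # "Failed while adding Virtual Ethernet Switch connections"
        New-VMSwitch -Name "External-VLAN51" -NetAdapterName "Team1-VLAN51" -AllowManagementOS $false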

    So we are stumped. In 2016 and 2012 R2 this is fine, and we can have our VMs in whatever VLANs we want by making sure the vNICs in the VMs are bound to whichever VSwitch is in turn bound to the relevant teamed NIC, but in 2016 v1709 it just fails. We've tried this by reinstalling both servers with 2016 full GUI and 2016 v1709, both with the same results. (In case you're wondering, the 2 switches the 2 physical NICs are connected to allow all the relevant VLAN-tagged traffic through appropriately.)

    I think this is a bug in v1709. We've tried tools like NVSPBIND to look for differences, but no joy... wondering if anyone can try to recreate this issue!

    Thanks in advance for any advice

    Andy

    Friday, June 22, 2018 1:31 PM

All replies

  • Good job explaining what you're doing; thank you for that. What I'd like to see is a justification for this build. Generally, I don't ask people why they want to do things because I figure they have their reasons, but this is quite over-engineered and I don't see any worthwhile payoff. Basically, if someone could wave a magic wand and make the barrier go away, what do you gain over a simpler design? Multiple virtual switches cause a fair amount of extra processing overhead on their own. Beyond that, you've stacked additional processing work onto the team, some of which will be duplicated by the virtual switch every time it processes a frame. The only positive outcome that I can see from this build working as you designed it is the convenience of not typing a VLAN ID into a vNIC configuration/PowerShell cmdlet. Is it really that important, or is there some other benefit?
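
    To make that concrete, the simpler design would be a single VSwitch on the team with the VLAN tag set per vNIC, something like the following (the VM name and VLAN ID are examples):

        # One VSwitch bound to the team's primary interface...
        New-VMSwitch -Name "External" -NetAdapterName "Team1" -AllowManagementOS $true
        # ...then tag each VM's vNIC with its VLAN instead of building a VSwitch per VLAN
        Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 51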

    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Friday, June 22, 2018 3:19 PM