Virtual Switch Setup Question

    Question

  • Hi,

    I was wondering how I should set up my virtual switch in a Server 2016 Datacenter Edition Hyper-V cluster (10 hosts) using Failover Clustering. I am using Hyper-V Manager at the moment.

    I already have two 8 Gb FC cards dedicated to storage, so the remaining ports I have available are...

    2 10 Gig E (currently teamed)

    4 1 Gig E (I don't want to use all four if possible)

    * Currently the host, cluster, and VMs are all using the same VLAN. I can separate these with separate VLANs if needed.

    I currently have all of the different functions (except storage) using the teamed 10 Gig ports and it's working just fine, but I guess best practice says to split the different functions (live migration, cluster, management, VM guest traffic) up.

    Would it be OK to just split the functions up using separate VLANs/internal/external switches and have any new VLANs trunked to the teamed 10 Gig ports? I would say 50-60% of the VM traffic will stay within the same 10-host cluster. Also, live migrations and/or failover will stay within the same 10 hosts.

    I come from a VMware background and would normally just create one or two virtual switches with multiple port groups/separate VLANs and use 2 10 Gig E for everything except the storage.

    Just looking for some advice/feedback from the community. : )

    Thanks

    Thursday, August 2, 2018 4:13 PM


All replies

  • Hi,

    Thanks for your question.

    Please check my understanding of your desired implementation based on the current description. Would you like to use all of your NICs to split up the different functions? If I have misunderstood, don't hesitate to let me know.

    May I confirm that there are 10 hosts running Hyper-V 2016 forming the cluster, with 2 FC cards, 2 x 10G E for NIC teaming, and 4 x 1G E for other, separate traffic in the entire environment?

    Here's a similar situation on the forum to yours; please check the following thread to see if it covers your implementation.

    https://social.technet.microsoft.com/Forums/en-US/1cb28bb3-0e6b-478f-bbb6-3532e7bb16a3/nic-teaming?forum=virtualmachinemgrclustering

    Hope this helps. If you have any question or concern, please feel free to let me know.

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Friday, August 3, 2018 7:02 AM
    Moderator
  • Hi,

    Thanks for the input and links.


    My pod will have 10 DL360s, clustered, with LUNs attached using two 8 Gb FC cards.

    I think this is what I would like to do with my available NICs.

    Team 2 1 Gig NICs for Management - VLAN 1

    Team 2 10 Gig NICs for VM network, Live Migration, Cluster. VLAN 2

    I guess my question is: for the 2 10 Gig NICs, should I put VM network traffic on its own VLAN (VLAN 3) and have Live Migration and Cluster share their own VLAN (VLAN 2), or is that overdoing it?

    If you have another idea on how I can slice up my available NICs please let me know.

    Thanks


    • Edited by Heybuzzz76 Monday, August 6, 2018 1:38 PM
    Friday, August 3, 2018 4:48 PM
  • Hi,

    Anyone have any suggestions for me?

    Thanks!

    Monday, August 6, 2018 11:50 PM
  • Hi,

    Sorry for my delay.

    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.

    Thank you for your understanding and support.

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Tuesday, August 7, 2018 1:36 AM
    Moderator
  • Hi,

    Based on my experience, we recommend configuring the network with the following topology.

    For example, we have four separate VLANs: VLAN 1, VLAN 2, VLAN 3, and VLAN 4.

    2 x 10 Gig E

    This is a teamed NIC ----- VLAN 1. Please create an external virtual switch based on this NIC; it is used for virtual machine access. Configure it as "cluster and client" in Failover Cluster Manager (a PowerShell sketch of these role assignments follows below). It can also be configured for live migration.

    4 x 1 Gig E

    One is used for the management network ----- VLAN 2. Configure it as "cluster and client" in Failover Cluster Manager.

    One is used for the heartbeat network ----- VLAN 3. Configure it as "cluster only" in Failover Cluster Manager.

    The last two are used for live migration ----- VLAN 4. Configure them as a teamed NIC and set it to "None" in Failover Cluster Manager.
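
    For reference, a minimal PowerShell sketch of the role assignments above, using the FailoverClusters module. The cluster network names here are hypothetical; match them to whatever names show up in your own cluster.

      # Assumed cluster network names; adjust to match your environment.
      # Role values: 0 = None, 1 = Cluster only, 3 = Cluster and client.
      Import-Module FailoverClusters

      (Get-ClusterNetwork -Name 'VM Access').Role      = 3   # cluster and client
      (Get-ClusterNetwork -Name 'Management').Role     = 3   # cluster and client
      (Get-ClusterNetwork -Name 'Heartbeat').Role      = 1   # cluster only
      (Get-ClusterNetwork -Name 'Live Migration').Role = 0   # none

      # Verify the result
      Get-ClusterNetwork | Format-Table Name, Address, Role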

    Hope this helps. If you have any question or concern, please feel free to let me know.

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com



    Tuesday, August 7, 2018 7:42 AM
    Moderator
  • Thanks for the awesome detail Michael!

    If I can only use the 2 10 Gig E NICs and 2 of the 1 Gig E NICs (limited switch ports), how would you suggest I combine the NICs for that setup?

    Thanks!

    Wednesday, August 8, 2018 1:29 PM
  • Hi,

    May I ask how many ports you can assign on your switch?

    As mentioned, because live migration traffic can travel over any network except those set to "None", we can select one or more networks and set a priority for live migration.

    Here's a link for your reference, hope this helps.

    http://www.msserverpro.com/best-practices-setting-hyper-v-cluster-networks-windows-server-2016/ 

    Please Note: Since the web site is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Friday, August 10, 2018 3:59 AM
    Moderator
    Hi,

    On the server side I would only like to use the 2 10 GIG E (teamed) NICs, but can add in 2 1 GIG E NICs if needed.

    The top of rack networking switch is 48 ports.

    I can create multiple VLANs to isolate the traffic and trunk those to any physical switch ports as needed.

    I read the article, but I did not see where he mentioned how many physical NICs he was using.

    Thanks 

    Monday, August 13, 2018 4:10 PM
    In your pictures above I see multiple networks listed. Do these populate when the OS sees additional VLANs that are trunked to the physical switch ports?

    At the moment I only have one VLAN trunked to my teamed 10 Gig E NICs, so I only see Cluster Network 1.

    Thanks

    Monday, August 13, 2018 5:02 PM
  • That's not a great article. I wouldn't follow it. The "Service Network" in particular is an utterly useless waste of overhead and that's just the most egregious of the immediately standout problems.

    Forget about your 1G NICs. The 10Gs are just embarrassing them. Disable the 1Gs and focus on your 10Gs.

    On each host, team the two 10Gs. I would probably use SET in this case but a team with a Hyper-V switch would be OK if you prefer.

    On each host, make two vNICs. Give one vNIC a full IP setup (IP, netmask, DNS) in a "real" network that you'll use for management. Give the other vNIC a partial IP setup IN A DIFFERENT SUBNET (IP and netmask only; prevent it from registering in DNS). Most people would consider it a best practice to put that second network and those vNICs into a separate VLAN but I'll let you decide if that's what you want to do.
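
    To make that concrete, here is a minimal per-host PowerShell sketch of the SET route. The adapter names, switch name, VLAN ID, and IP addresses are all hypothetical; substitute your own values.

      # Two 10 GbE ports into a Switch Embedded Team (switch independent by design)
      New-VMSwitch -Name 'ClusterSwitch' -NetAdapterName '10G-1','10G-2' `
          -EnableEmbeddedTeaming $true -AllowManagementOS $false

      # Management vNIC: full IP setup in the "real" management network
      Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'ClusterSwitch'
      New-NetIPAddress -InterfaceAlias 'vEthernet (Management)' -IPAddress '10.0.10.11' `
          -PrefixLength 24 -DefaultGateway '10.0.10.1'
      Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses '10.0.10.5'

      # Second vNIC: different subnet, IP and mask only, no DNS registration
      Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'ClusterSwitch'
      New-NetIPAddress -InterfaceAlias 'vEthernet (LiveMigration)' -IPAddress '10.0.20.11' -PrefixLength 24
      Set-DnsClient -InterfaceAlias 'vEthernet (LiveMigration)' -RegisterThisConnectionsAddress $false

      # Optional: put the second vNIC in its own VLAN
      Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20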

    Once that's done, Failover Cluster Manager will recognize those two networks. Name them "Management" and "Live Migration", or whatever you like. Set Live Migration so that it prefers the "Live Migration" network but can secondarily use the Management network.
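
    If you prefer to script that part too, a sketch assuming the example subnets above (pick whatever names you like):

      # Rename the auto-generated "Cluster Network N" entries to something meaningful
      (Get-ClusterNetwork | Where-Object Address -eq '10.0.10.0').Name = 'Management'
      (Get-ClusterNetwork | Where-Object Address -eq '10.0.20.0').Name = 'Live Migration'

      # The Live Migration network preference itself is then set in Failover Cluster Manager
      # (Networks > Live Migration Settings), ordering 'Live Migration' first.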

    That's plenty. You really don't have to do a lot of work.

    If it makes you feel better, you can go back and add an additional network for Cluster traffic (third vNIC on each host, third IP set with address IN A DIFFERENT SUBNET and mask only, and third VLAN). But, with a pair of 10G cards and the two IP pathways created by the above, I don't know that it's worth the effort. Probably not enough overhead to hurt you, either.

    DO NOT create a vNIC in the management operating system for virtual machines. DO NOT attempt to create a network in the cluster for virtual machines, either. The VMs CANNOT and WILL NOT use these things. They have their own vNICs and will use those.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Monday, August 13, 2018 8:11 PM
    Thanks for the info Eric. I was actually just using that article to show the Cluster Network 1, 2, 3... naming. I like your way and have read https://www.altaro.com/hyper-v/architect-networks-hyper-v-cluster/.

    A few questions... 

    How many REAL networks should I have trunked to the 10 Gig E switch ports? I was thinking VLAN 1 for the physical hypervisor, cluster, live migration and VLAN 2 for the actual VMs. Anything I'm missing here... My storage is coming in from two 8 Gb fiber ports.

    Questions...

    -If I choose SET I assume I disable LACP on the switch side and have a different protocol added? 

    -If I don't choose SET would I just keep LACP and dynamic load balance in the Team's advanced options?

    -DO NOT create a vNIC in the management operating system for virtual machines. = Sorry, but I'm not following what you're saying.

    -DO NOT attempt to create a network in the cluster for virtual machines, either. The VMs CANNOT and WILL NOT use these things. They have their own vNICs and will use those. = I think I understand, but could you explain it differently please?

    I really appreciate your time.



    Monday, August 13, 2018 9:19 PM
  • You have a lot of interrelated questions and they have interrelated answers, so rather than hit them each individually, I will tackle the entire problem holistically in narrative fashion.

    Let's kick it off with a look at the "Big Picture" problems that you need to solve:

    • You need the VMs to be able to work independently from each host in your cluster
    • You need each node in the cluster to correctly operate as a member of the cluster

    I make these distinctions because Hyper-V and Failover Clustering are two separate things. They mesh well, but they are two separate things. This is important.

    First, you need Hyper-V hosts that work independently. If you don't have that, then your VMs can't work independently when they are member nodes, right?

    So, to satisfy that, you need a Hyper-V switch. You can:

    • Make a team and then put a Hyper-V virtual switch on it. That gives you access to all teaming and load balancing modes at the expense of greater management overhead
    • Create a SET. That restricts you to switch independent mode.

    Truthfully, I think the world has been moving away from LACP for a while. I was a proponent of it when Microsoft introduced it with 2012 but I've since become rather disillusioned. It just doesn't work consistently enough with other vendors and everyone blames everyone else for that. I now use switch independent mode even when I don't use SET. The good news is that it doesn't really hurt anything performance-wise, so you're not losing out.
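
    If you do go the traditional team route, a minimal sketch of a switch independent team with a Hyper-V switch on top (the adapter, team, and switch names are hypothetical):

      # LBFO team in switch independent mode with dynamic load balancing
      New-NetLbfoTeam -Name 'HVTeam' -TeamMembers '10G-1','10G-2' `
          -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

      # Hyper-V virtual switch bound to the team interface
      New-VMSwitch -Name 'ClusterSwitch' -NetAdapterName 'HVTeam' -AllowManagementOS $false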

    Whichever way you choose, you get a virtual switch. As for physical ports, I want to reiterate that 1G and 10G don't mix well and 1G is so outclassed by 10G that it's a waste of your time to try. Kill your 1Gs and use only your 10Gs.

    So, now you've got a virtual switch/team and no other outside connectivity, right? So you need a vNIC in the management OS for the management OS to use. That's aptly enough what we call the "Management" vNIC. That's the one that needs a full IP setup. What network/VLAN? That's up to you. In smaller networks, there is typically only one IP network for all of the servers to use, so I would put it in that network/VLAN. In larger networks, there is commonly one (or more) network just for virtualization servers to use, so I'd put it in that VLAN. Do you need a separate network? I can't answer that for you. Isolation has benefits, but it brings routing and firewalling and other things. If you're not invested enough to do those things fully and carefully, then it's not worth it.

    When you create VMs, you'll stick them into a VLAN or leave them untagged, whatever suits you. Do they need to be in the same network/VLAN as the management operating system? If you want the management operating system to talk to the VMs without using a router, then it's best to put the Management vNIC into the same network/VLAN as the VMs. If you want the VMs to be able to talk to the management OS through a router/firewall, then it's best to put the Management vNIC into a different network/VLAN. Of course, each VM can be placed separately into its own VLAN, and even have separate vNICs in separate VLANs. So, there's no One True Way. However, DO NOT give the management OS a Management vNIC in X VLAN and then set up a separate vNIC specifically for talking with VMs. You gain exactly zero benefits (uncommon edge cases excluded). Furthermore, this all may be academic -- does the management OS need to directly communicate with the VMs via the network anyway? Usually, no. It does the guest service health checks that way, but I only know a few people that even set that up. It does not otherwise impact how the cluster operates. Most importantly, neither the host nor the cluster needs any network connectivity to a VM in order to provide necessary services.
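
    For completeness, a quick sketch of per-VM VLAN assignment (the VM names and VLAN ID are hypothetical):

      # Tag a VM's vNIC into a VLAN
      Set-VMNetworkAdapterVlan -VMName 'Web01' -Access -VlanId 30

      # Or leave a VM untagged
      Set-VMNetworkAdapterVlan -VMName 'App01' -Untagged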

    OK, so we're at functional stand-alone configuration, right? One team, one virtual switch, one vNIC for Management, and all the VMs humming along in their own networks (hypothetically maybe). Next is to put the Hyper-V hosts into a cluster.

    A cluster adds the need for reliable inter-node communication. That rule goes for any Microsoft Failover Cluster, Hyper-V or otherwise. We've already dealt with Management. We could stop there -- it will probably work fine. But, with only one IP network, we only have one unique IP pathway for traffic to use. So, management and Live Migration and cluster traffic will always use the same IP path, which means the same physical NIC. Your team will ensure that as long as one physical NIC works, that traffic will travel. But, with only one IP, it will always use only one physical path even when both are up.

    To get that second path, we need a second IP subnet on a second NIC. Since we're using one vSwitch on one team (or SET), then we do that with a vNIC. We give it an IP and a subnet mask only and forbid it from registering in DNS, because doing otherwise makes a multi-homed nightmare mess. The cluster will figure out what to do with it and that's all that we want.

    So now we have a cluster. Each node has one team, one virtual switch, and two vNICs. One vNIC is used for management traffic, the other one isn't. By default, with absolutely no effort on your part, the cluster will use SMB Multichannel for inter-node communication (cluster heartbeat, CSV, etc.). It will look at your two IP networks and pick the quietest one for small traffic and it will automatically balance bigger traffic. You have two IPs and two physical paths, so that's about the best balancing you'll reasonably get. You could add more vNICs but diminishing returns will start with the very next one you add after those initial two.

    All that's left is Live Migration. You could do nothing; it will probably be fine. However, since we know that a lot of things will use that Management vNIC, I like to have Live Migration prefer the second vNIC. That's why I call my second one "Live Migration" and why I always configure it to be preferred. If you have RDMA-capable pNICs, then you'll configure Live Migration to use SMB and then it will use SMB Multichannel and even this will all be moot. But, most of us don't have RDMA-capable pNICs, so I think that my configuration works more universally.
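
    If you are in the RDMA camp, a one-line sketch of pointing Live Migration at SMB on each host (otherwise leave the default alone):

      # Use SMB as the Live Migration transport (benefits from SMB Multichannel / SMB Direct)
      Set-VMHost -VirtualMachineMigrationPerformanceOption SMB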

    And that's it. You're done.

    But then there's the question about using a cluster network for VMs to communicate. Don't, because you can't. The VMs themselves are thoroughly unaware of the cluster. Their networking ALWAYS goes through their host's virtual switch -- completely unrelated to anything in clustering. They will not try to talk to the cluster unless you've got some app running in a VM that wants to (like Windows Admin Center in a VM). But, apps like that will use the VM's IP to call out to the cluster's management IP, which is on the management network. If the cluster wants to talk to a VM, then it will pick a host and use its management IP to call out to the VM's IP. Yes, you could add a vNIC so that the hosts and the VMs are in the same IP/VLAN, but a) what problem did you solve by not just putting the Management IP in the same network? and b) why are your clusters and VMs talking to each other so much that you need that solution? Making a vNIC or a cluster network specifically to talk to VMs is a pointless endeavor that accomplishes nothing.

    HTH


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    • Marked as answer by Heybuzzz76 Thursday, August 16, 2018 12:18 AM
    Monday, August 13, 2018 10:34 PM
  • Wow... Thanks sir. 

    I'm going to re-read that a few times and let it sink in before I even ask another question. : )

    As you stated, the most important thing I was missing is that Hyper-V and Failover Clustering are completely separate. I find myself trying to control networking in Virtual Switch Manager. From what I can tell you don't really do much there...

    My virtualization experience is with VMware so I'm trying to forget port groups and vSwitches.

    Thanks!!!!

    Monday, August 13, 2018 11:05 PM
  • OK this is what I've done so far...

    Created a Team with my 2 10 Gig E cards. I'm using LACP (switch ports were also already set to LACP) and dynamic for now.

    Created a new External virtual switch and chose Microsoft Network Adapter Multiplex Driver (this was the new option created after the team was created). "Allow Management" was left checked. I left the VLAN ID blank for now...

    I created 2 new vNICs by running:

     Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'ConvergedVMSwitch'

    and

     Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'ConvergedVMSwitch'


    I went ahead and gave the new Management vNIC a real IP/DNS/GW that is registered in DNS. It pings.

    So now I need to give the "LiveMigration" vNIC its IP info... In my scenario I need to have the hypervisor in one VLAN and all the VMs on another. So my question is: do I add a static IP from the 2nd VLAN to the new LiveMigration vNIC?

    Once I figure the above out I will move on to building the cluster.

    Thanks!!!

    Tuesday, August 14, 2018 9:12 PM
    If you left the "allow" box checked and then added two vNICs, then you now have three vNICs and one of them is useless. The "allow" checkbox is a dirty liar that doesn't mean a word of what it says. If you named your switch 'ConvergedVMSwitch', then the stowaway vNIC will show up with that name and you can toss it overboard with Remove-VMNetworkAdapter -ManagementOS -Name 'ConvergedVMSwitch'.

    You must add an IP address in a unique subnet to each additional vNIC to be used by a Failover Cluster. That is how it distinguishes networks. Whether or not you place it in its own VLAN is up to you; the cluster is 100% ignorant of VLANs and operates solely at layer 3. You just need to make sure that the vNIC-to-VLAN assignment is uniform across your cluster or you'll break connectivity.
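
    One way to keep that assignment uniform across all ten nodes is to push it out in a loop. A sketch, assuming the LiveMigration vNIC from earlier already exists on every node and using hypothetical host names, addressing, and VLAN ID:

      # Give each node's LiveMigration vNIC a unique IP in the same subnet and the same VLAN
      $nodes = 1..10 | ForEach-Object { 'HV{0:D2}' -f $_ }
      $octet = 11
      foreach ($node in $nodes) {
          Invoke-Command -ComputerName $node -ScriptBlock {
              param($ip)
              New-NetIPAddress -InterfaceAlias 'vEthernet (LiveMigration)' -IPAddress $ip -PrefixLength 24
              Set-DnsClient -InterfaceAlias 'vEthernet (LiveMigration)' -RegisterThisConnectionsAddress $false
              Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20
          } -ArgumentList "10.0.20.$octet"
          $octet++
      }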


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    • Marked as answer by Heybuzzz76 Thursday, August 16, 2018 12:18 AM
    Wednesday, August 15, 2018 4:02 AM
  • Crap I did check that box and was wondering where that came from. Removed...

    When you say each additional vNIC, does that mean just the vNIC on the host I'm on while building the cluster, or do you need to give each vNIC (in this case LiveMigration) a unique IP on each host in the cluster?

    I have a cluster set up now (without any additional vNICs) and I just used one FQDN/IP to set up the cluster. When I got to the "access point for administering the cluster" screen I added the real network /27 and one static IP. Maybe this is completely separate from what you're referring to.

    If not: if I have a 10-host cluster and each host has the LiveMigration vNIC, would I need 10 separate IPs, one for each?

    Thanks!

    Wednesday, August 15, 2018 6:57 PM
    I have a 10-host cluster and each host has the LiveMigration vNIC; would I need 10 separate IPs, one for each?

    This is correct. And they need to all be in the same subnet. The cluster doesn't know anything about "Live Migration vNIC". All it knows is that each host has a bunch of unique IP endpoints. It will look across all nodes in the cluster and try to match up NICs in the same subnet(s). Each subnet it finds will be called a "cluster network".

    Any stray IPs outside a fully matched network will be formed into a "partitioned" network. You don't want that. So, if you're going for a Management network and a Live Migration network and you open up Failover Cluster Manager to find three or more cluster networks, then you need to fix your IPs.
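
    A quick way to check that (sketch):

      # Two healthy cluster networks expected; a "Partitioned" State means mismatched IPs somewhere
      Get-ClusterNetwork | Format-Table Name, State, Address, Role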

    The access point for a cluster that just runs Hyper-V doesn't really do much, so I don't invest a lot of time on that. It just needs a free IP on the management network. You handled that correctly.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    • Marked as answer by Heybuzzz76 Thursday, August 16, 2018 12:17 AM
    Wednesday, August 15, 2018 7:07 PM
  • Awesome thanks Eric!

    Out of curiosity: my first setup is a 3-host Hyper-V cluster just using the vNIC (configured with a REAL IP) that is added when you check "allow". I created a cluster and used a REAL IP in the same VLAN as the 3 hosts. All 4 IPs (hosts 1, 2, 3 and the cluster) are registered in DNS with their names. I built VMs, added them as virtual machine roles, and from what I can tell everything including Live Migration works. How is this cluster able to function just using a single REAL IP?

    Thanks!

    Wednesday, August 15, 2018 8:02 PM
    There is no reason that it wouldn't work with only a single IP network. By default, all networks are enabled to carry cluster and Live Migration traffic. We are adding the second IP network primarily to gain load balancing capability.

    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    • Marked as answer by Heybuzzz76 Thursday, August 16, 2018 7:02 PM
    Thursday, August 16, 2018 2:13 AM