Windows Server 2012 R2 Hyper-V clustering

  • Question

  • Hi everyone,

We have purchased two Windows Server 2012 R2 Datacenter editions and are trying to set up a Hyper-V cluster.

I have already installed and configured the Failover Clustering feature, enabled Hyper-V on both servers, and created the Hyper-V cluster.

Now there are a few things I want to accomplish:

1. I have a total of 6 NICs per server, but I don't know how to set up full failover and HA with the NICs (extremely simple in VMware).

2. The Hyper-V cluster keeps assigning a 169.254 IP automatically, even though I have manually configured 192.168.1.x addresses.

3. What can I use to remotely manage both hosts in one window?

4. Why can't all hosts see the shared disk? Only one server at a time can see it; the other server shows it as Reserved.

We are using a Dell VRTX, with servers and storage within one chassis.
All the servers and storage are connected internally.
Each server has 2 internal ports plus one 4-port NIC, so each server has 6 NIC ports in total.
What I want to see, if possible, is VMware-like behavior, where every host can see and manage the shared disks.
All disks are SAS, but again there is no external connection, so I don't think it really matters.

Right now, when I open Hyper-V Manager, I can't even change the default VM and disk path to the shared storage, as only one server can see it.

NOT TO MENTION, VMWARE IS EXTREMELY SIMPLE AND RELIABLE

    Thanks in Advance.

    Friday, June 20, 2014 3:29 PM


All replies

1. Microsoft has best practices for networking; I'm not sure of the link or whether it's current for 2012 R2. However, the norm, at least from what I've seen, is:

- 1 NIC: Host machine, system administration, updates, etc.
- 1 NIC: Heartbeat for the failover cluster
- 1 NIC: Live migration for Hyper-V (the priority for which network the cluster uses for live migration is set in Failover Cluster Manager)
- Rest of the NICs: You can bond them together using NIC Teaming (found in Server Manager), or you can use each NIC to create a separate virtual switch

NOTE: Make sure you go into the virtual switch manager on each host, open the properties of each virtual switch, and turn off the option that lets the host share the adapter. (This really screwed me over.)

2. Your 169 address is probably on the virtual switch adapter. Set the IP back to dynamic and follow my last bit of advice in #1.

3. Remote management of what? If you mean the Hyper-V VMs, you can do this through Failover Cluster Manager, and in fact you have to if you don't have Virtual Machine Manager. Otherwise, Server Manager has server grouping if you want to manage roles and such from one node.

4. One host will have ownership of a Cluster Shared Volume; however, it will be mounted on all eligible nodes, usually under C:\ClusterStorage. You will only be able to manage the disk through Disk Management on the owner node.

As for creating, managing, or touching anything to do with VMs: now that you have them in a cluster, you must do it through Failover Cluster Manager. VMs are now classified as roles in the cluster. You can add a new 'role' and specify a virtual machine; the storage location and such will default to cluster storage when you do it through Failover Cluster Manager.
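If you prefer PowerShell to the Failover Cluster Manager GUI, a minimal sketch of making an existing VM highly available might look like this (the VM name CTH-VM01 is a placeholder, not something from this thread):

# Make an existing VM on this node a clustered role (run on a cluster node).
Add-ClusterVirtualMachineRole -VMName "CTH-VM01"

# Verify that the new role came online in the cluster.
Get-ClusterGroup -Name "CTH-VM01"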

    • Proposed as answer by Alex Lv Monday, June 23, 2014 2:56 AM
    Friday, June 20, 2014 4:26 PM
  • "VMWARE IS EXTREMELY SIMPLE AND RELIABLE"

Matter of perspective. I imagine you have been working with VMware for a while and are now learning Windows Server. I'm the opposite: I find Microsoft clustering very simple and reliable, but I would find VMware confusing because I haven't worked with it.

"I have a total of 6 NICs per server, but I don't know how to set up full failover and HA with the NICs (extremely simple in VMware)"

Extremely simple in Windows Server, too. Create one or more teams and carve out the networks you need. Here is a useful link that covers a lot of things, including recommendations on networks: Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)

    "HyperV cluster keep assigning 169.254 IP automatically, and also have manually configured 192.168.1.x"

Going to need more information on this one. Which network? How are your networks configured?

    "What can I use for remote management for both hosts in one window?"

The built-in Server Manager that comes with Windows can remotely manage both hosts in one window. I have set up a Windows 8.1 system with the Remote Server Administration Tools (which is basically Server Manager and a bunch of other utilities like Failover Cluster Manager, Hyper-V Manager, etc.) to manage a whole bunch of servers, both physical and virtual.

    "Why all Hosts cant show shared disk, only 1 server at a time can see shared disk, other server shows as Reserved."

That's the way Microsoft clustering has always worked. A resource (disk, IP address, application instance, etc.) is owned by one node for management purposes. The disks are shared so that, should the need arise, ownership can be quickly transferred. Ownership for Hyper-V Cluster Shared Volumes in particular means that the owner controls writing metadata (file creations, deletions, extensions, etc.). If more than one node were to try to do this metadata updating, the volume would quickly become corrupted, because the shared disk is just an NTFS (or optionally ReFS) disk. This also means that moving the disk in and out of a cluster does not require any reformatting. But with CSV, even though the volume is owned by a single node, all nodes have direct read/write capability to the virtual hard drives being used by the VMs running on them. This also means that if, for whatever reason, a node loses physical connectivity to the volume, the VMs on that node continue running: all IO is redirected over the network to the owning node.


    . : | : . : | : . tim

    • Proposed as answer by Alex Lv Monday, June 23, 2014 2:56 AM
    Friday, June 20, 2014 4:39 PM
  • Hi,

Hyper-V has an entirely different design. For how best to design the Hyper-V cluster network, you can refer to the following related article:

    Network Recommendations for a Hyper-V Cluster in Windows Server 2012

    http://technet.microsoft.com/en-us/library/dn550728.aspx

The 169.254 address is Automatic Private IP Addressing (APIPA): the Internet Assigned Numbers Authority (IANA) has reserved 169.254.0.0-169.254.255.255 for this purpose. As a result, APIPA provides an address that is guaranteed not to conflict with routable addresses.

If you want to remotely manage the cluster nodes, you can use Remote Desktop. Also, before you create the failover cluster you should run failover cluster validation first; the validation will quickly locate potential problems.

    The related KB:

    Use Validation Tests for Troubleshooting a Failover Cluster

    http://technet.microsoft.com/en-us/library/cc770807.aspx
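For what it's worth, validation can also be started from PowerShell; a sketch, assuming your nodes are named host1 and host2 (substitute your own):

# Run the full cluster validation test suite against both nodes;
# an HTML report is written for anything that needs attention.
Test-Cluster -Node host1, host2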

    Hope this helps.



    Monday, June 23, 2014 3:20 AM
Did you run the Cluster Validation Wizard? It will generally tell you if you have a misconfiguration of your disks. Generally what I do is ensure each node of the cluster can properly view the disk by mounting it on each node before I even build the cluster. It's not a requirement, but I do it just as a guarantee; I've never had a problem with any configuration where I first mounted the volume on each node.

Once the disks are properly configured (i.e., available to each node in the cluster), the cluster will automatically share the disks. I'm not sure what you mean by "how to make 2nd server to see shared disk?" If you have properly configured the disks, the second (through 64th) node will see the shared disk.

Maybe you are misinterpreting what 'shared' means in Microsoft lingo. It is shared at the hardware level, i.e., each node is physically connected and can take ownership. But only one node in the cluster will ever own the disk at a time. In some roles, only the owning node can access the disk. This changes if you create the disk as a Cluster Shared Volume. In that case, one node still owns the disk, but each Hyper-V node can read and write directly to the Cluster Shared Volume.

    I think you need to provide a little more information about your configuration and what you are trying to do.  Then we can be more specific in our answers to help you.


    . : | : . : | : . tim

    Monday, June 23, 2014 5:15 PM
Thanks guys, I have resolved the CSV and GPO issues now, but the APIPA address is still there; below is the screen shot.

I guess I can't paste a screen shot here.
In Server Manager, when I click on Hyper-V, on the right side it shows
host 1 as 192.168.1.61, host 2 as .62, and the cluster IP as .60:

    host1 169.254.1.182, 192.168.1.60, 192.168.1.61
    host2 169.254.1.170, 192.168.1.62
    cluster 169.254.1.182, 192.168.1.60, 192.168.1.61

And where exactly can we set up NIC teaming for the various purposes, like management, storage, VMs, etc.?

    Monday, June 23, 2014 7:52 PM
It's highly recommended that you assign static addresses to the hosts BEFORE creating the cluster, as the cluster will require an IP in the same network as the hosts.

So break the cluster and assign addresses to the NICs first. I also recommend renaming the NICs, for simplicity and easier troubleshooting.

Do the following:

1. Destroy the cluster in Failover Cluster Manager.

2. Rename the NICs to be used in the failover cluster; name them MGMT, CSV, and Live.
Then assign addresses to them manually... MGMT is the one you will remote to and should have your corp network address... CSV and Live can be anything you want, as they are not required to be routable outside the cluster, so assign them CSV: 10.0.0.1/24 and Live: 10.0.1.1/24. Disable any other NICs for now (you can create teamed interfaces later, when you are confident your cluster is working). A PowerShell sketch of steps 2 and 3 follows below.

3. Create the cluster; make sure to assign an IP to the cluster on your corp network.
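A rough PowerShell sketch of steps 2 and 3, run on each node with that node's addresses (the adapter names "Ethernet"/"Ethernet 2"/"Ethernet 3", the gateway 192.168.1.1, and the node names are assumptions; substitute your own):

# Rename the three NICs used by the cluster.
Rename-NetAdapter -Name "Ethernet" -NewName "MGMT"
Rename-NetAdapter -Name "Ethernet 2" -NewName "CSV"
Rename-NetAdapter -Name "Ethernet 3" -NewName "Live"

# MGMT gets a corp-network address with a gateway; CSV and Live are private,
# so they get no gateway (they never need to route outside the cluster).
New-NetIPAddress -InterfaceAlias "MGMT" -IPAddress 192.168.1.61 -PrefixLength 24 -DefaultGateway 192.168.1.1
New-NetIPAddress -InterfaceAlias "CSV" -IPAddress 10.0.0.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Live" -IPAddress 10.0.1.1 -PrefixLength 24

# Then, from one node, create the cluster with a static IP on the corp network.
New-Cluster -Name CTHCLUSTER -Node host1, host2 -StaticAddress 192.168.1.60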

When you add the disks to the failover cluster, you first need to create a volume on the disk in Disk Management, then add the disk to the cluster, and then click Add to Cluster Shared Volumes inside Failover Cluster Manager.
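The same thing in PowerShell, as a sketch (this assumes the volume has already been created in Disk Management; "Cluster Disk 1" is the default resource name and may differ on your system):

# Add every disk that is visible to all nodes into the cluster...
Get-ClusterAvailableDisk | Add-ClusterDisk

# ...then promote the clustered disk to a Cluster Shared Volume.
Add-ClusterSharedVolume -Name "Cluster Disk 1"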

This should create you a cluster with a CSV... you should have 3 NICs spare, as you mentioned you had 6... you can now try using NIC Teaming, found on the Server Manager screen for the local server, to create a teamed interface... this requires that you understand your switch setup, as you need to know whether you are using the LACP or switch-independent options inside the teaming configuration... add all 3 spare NICs (most likely using switch independent), then add virtual interfaces to the team... these will be virtual NICs found in Network Connections... you can now treat them as if they were physical NICs and assign IPs to them.
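When you do get to the teaming stage, a minimal sketch (the team name and member NIC names are placeholders; switch independent assumed, as above):

# Team the three spare NICs in switch-independent mode.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC4", "NIC5", "NIC6" -TeamingMode SwitchIndependent

# Optionally add an extra team interface, e.g. for a tagged VLAN.
Add-NetLbfoTeamNic -Team "VMTeam" -VlanID 10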

On a side note, WRT your comment about VMware being simple to use, bear this in mind...

These simple cluster setups are covered at length in the MCSE 2012 course and certification... it's time to familiarise yourself with the content of that course and put down the Testking papers...

    • Proposed as answer by Alex Lv Tuesday, June 24, 2014 1:50 AM
    • Marked as answer by mburmi Tuesday, July 15, 2014 6:36 PM
    Tuesday, June 24, 2014 1:36 AM
Thanks. I think I didn't mention this earlier, but it's important to know that the Dell shared storage on the VRTX is presented to the servers as DAS.

So in that case I might not need to create any CSV network on a different subnet, right?

If that's the case, I think I need to create only one virtual switch in Hyper-V Manager, on our LAN subnet?

And how do we create a different virtual switch for live migration? (Do we require that?)

    Tuesday, June 24, 2014 8:09 PM
  • I'm pretty sure that Dell has recommended practices for configuring a Hyper-V cluster with their storage.  You might want to get their document and follow their recommendations.

Live Migration does not use a virtual switch. It uses a physical NIC, as it is used by the host, not by the guest VMs. Guest VMs need virtual switches. The only time the host needs a virtual switch is if you are sharing a network between the host and the guests, and since live migration is never used by the guests, you would never share that network.


    . : | : . : | : . tim

    Tuesday, June 24, 2014 10:09 PM
Whether it's DAS or SAN is irrelevant for CSVs; it comes down to whether or not both servers can see the same storage. Both DAS and SAN can back CSVs within a cluster.

The CSV and Live networks are STILL required; these are for cluster communications between hosts. To put it simply, for the CSV network: one of the features of CSVs is that if the storage link (iSCSI, fibre, DAS) becomes unavailable for any reason on one node, storage traffic can be redirected over the cluster network to another node and hence to the storage device. It is also used for snapshots of the VMs: traffic is redirected over the CSV network during the snapshot process created by the VSS writer on the CSV owner... so in short, yes, you still need ALL the networks I mentioned earlier, and CSV and Live MUST be on different PRIVATE networks... this is to eliminate unwarranted network usage on your corp network...

I still strongly believe you should not create the teamed interfaces for this cluster given your limited knowledge; create it using 3 physical NICs like I mentioned earlier... once your cluster is established and you are happy with it, you can start mucking around with the 3 spare NICs and create teamed interfaces... these teamed interfaces can then be slowly and systematically added to the CSV, Live, and MGMT networks, and you can slowly remove the 3 physical NICs you set the cluster up with and add them into the team afterwards.

    Tuesday, June 24, 2014 11:31 PM
I tried to create the cluster with a single NIC, and when creating NIC teaming at a later stage it seemed to mess things up, with a new virtual NIC and DHCP IPs.
I am really confused; I never got this confused even when I used Linux for the first time, and I have been using Windows for a couple of years.

    Is there any real good step by step guide available?

But one more thing: I tried to create the cluster with only 1 NIC and used it for pretty much everything (cluster, mgmt, VM traffic), and it seems to be working.
And do I need a default gateway for the CSV or Live networks?

But when you say it is required to create different networks, that confuses me.

And yeah, one more thing: regardless of what I do, the cluster virtual adapter always gets an APIPA address.

MS definitely needs some good brains to develop things, not just referral employees.

    • Edited by mburmi Wednesday, June 25, 2014 2:05 PM
    Wednesday, June 25, 2014 1:08 PM
  • No, you do not need default gateways on your CSV or Live Migration networks.  Those only need to talk to the other nodes in the cluster and are on the same IP subnet, so a gateway would never be used.  Even if you are going to live migrate to other servers, I would still configure them all to be on the same subnet dedicated to live migration so no gateway would be needed.

    Yes, there are several good step-by-step guides out there.  Again, I would be surprised if Dell did not have something for their storage platform.  Jose Barreto does a great job of presenting step-by-step guides for many things related to storage and clustering - http://blogs.technet.com/b/josebda/  If you simply search TechNet for 'step-by-step hyper-v clusters' you will come up with several guides.

    If you want an excellent presentation on networking within a cluster, watch this presentation by the clustering Principal Product Manager - http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/MDC-B337#fbid=?hashlink=fbid  Might be more detail than you want, but it is full of great information.

    For a great primer on Hyper-V networking, see http://blogs.technet.com/b/jhoward/archive/2008/06/16/how-does-basic-networking-work-in-hyper-v.aspx

    For information on NIC teaming, see http://www.microsoft.com/en-us/download/details.aspx?id=40319

    For a good checklist on setting up Hyper-V, including clustering, see http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx

    As you can tell, with a little searching of TechNet, you can find pretty much anything you need.


    . : | : . : | : . tim

    Wednesday, June 25, 2014 2:13 PM
  • NIC Settings


    • Edited by mburmi Wednesday, June 25, 2014 3:19 PM
    Wednesday, June 25, 2014 3:17 PM

The corp network is 192.168.1.0/24.

CSV: 10.10.1.20 (.21 on node 2)
Live: 10.10.10.10 (.11 on node 2)
Cluster NIC: 192.168.1.64 (but when I created the cluster I gave it 192.168.1.60),
so now it is showing multiple IP addresses.

The 1st image shows the NICs I have (I renamed them),
the 2nd image shows the NIC settings in the failover cluster,
and the 3rd shows the IP settings in Server Manager.

Questions:
Why is the failover cluster not showing the MGMT NIC in there? (I have disabled the VM network for now.)
First, do I really need separate subnets? If yes, how can I define which NIC the storage uses? (Both nodes are able to access the storage anyway.)
I want to use the teamed VM Network for VM traffic, so the VMs can communicate with each other and the rest of the network.
And as you can see, APIPA is still there.

    Thanks again.


    • Edited by mburmi Wednesday, June 25, 2014 4:40 PM
    Wednesday, June 25, 2014 3:18 PM
As the CSV and Live networks are private, they don't need gateways.

If you click on Cluster Network 2 in Failover Cluster Manager, it will show you the IPs and NICs in it; if it's the MGMT network, then it's picking up the NICs in the team or the actual team interface itself... I did say leave the teaming alone for now...

I would check DNS for the multiple cluster IPs; there must still be multiple DNS entries for the cluster from your other attempts at creating a cluster... delete the others. It's worth noting that when I say destroy the cluster and start again, this is done by using the More Actions menu in Failover Cluster Manager and clicking the Destroy Cluster option... then delete any DNS references to the cluster IPs (not the hosts).

Open a command prompt (Run as Administrator) and type, for each node:

    cluster node host1.domainname.com /forcecleanup

This should get rid of any weird lingering config the cluster is picking up, so you are starting fresh again...
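Note that cluster.exe is a legacy tool and may only be present on 2012 R2 if the Failover Cluster Command Interface feature is installed; the PowerShell equivalent (a sketch, using the same placeholder host name) is:

# Wipe any stale cluster configuration from the node.
Clear-ClusterNode -Name host1.domainname.com -Force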

Create all the networks first: MGMT, CSV, Live... disable ALL others and delete the team... we will get to that later.

CSV: 10.10.1.21 (.22 on node 2)
Live: 10.10.10.11 (.12 on node 2)

Subnet mask: 255.255.255.0

Now create the cluster; you should end up with a cluster showing the 3 networks in Failover Cluster Manager.

Now add the storage as explained before, by creating the volume, then adding the disk into Failover Cluster Manager, and then creating a CSV from it... at this point a cluster validation check would be good, to see what's not right with the configuration... address any issues you find in the validation report.

Now for the Hyper-V side of this, for VMs... (I think you are confusing failover cluster networking and Hyper-V networking with virtual switching... they are entirely separate). Now that we are happy with the cluster and networks, we can enable the VM Traffic NICs and create a team with them; this will create a teamed interface called VM Network... now here is where I think your networking falls over... when you create an external switch for Hyper-V you need to select the multiplexor driver (this is the teamed interface), NOT the physical NICs named VM Traffic.
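As a sketch of that last point (NIC and switch names are placeholders): the external switch must bind to the team's multiplexor interface, not to the physical members.

# Team the VM-traffic NICs; this creates the multiplexor (team) interface.
New-NetLbfoTeam -Name "VM Network" -TeamMembers "VM Traffic 1", "VM Traffic 2" -TeamingMode SwitchIndependent

# Bind the external virtual switch to the teamed interface and keep it
# dedicated to the guests (no management OS vNIC on it).
New-VMSwitch -Name "External" -NetAdapterName "VM Network" -AllowManagementOS $false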

Here is a step-by-step guide on teaming for Hyper-V:

    http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/

    this should have you up and running....


    Wednesday, June 25, 2014 11:31 PM
Oh, and on the teaming guide: we are aiming for the 'one switch for everything' option, but only after we have it up and running with VM Traffic, AFTER the cluster is established and stable...
    Wednesday, June 25, 2014 11:45 PM
I destroyed the cluster, cleaned up all the AD entries for the old clusters from DNS, WINS, etc., and then created a new cluster.

CSV, Cluster Network 2, MGMT, and Live are not teamed; they are single NICs. The only thing that is teamed is VM Traffic (VM Network), and that is used for the external switch for VM communication.
And after creating all these networks, I don't have any spare NICs now anyway.

I have already created a VM, and it looks like it's working fine. I have tested live migration and failover, and they work too, using the Live NIC (I disabled the Live NIC and migration broke, so it's working).

Only 2 things at this point:
1st - even after creating the new cluster and cleaning all the AD objects, why is there still an APIPA address? (Is there any harm in leaving it as it is?)

2nd - how can I tell the system to use CSV for storage communication? I don't think it's using the CSV NIC for anything (but again, how can I make sure?)

    thanks again.

    Thursday, June 26, 2014 1:09 PM
  • "there is still APIPA "

    A screen shot of Control Panel\All Control Panel Items\Network Connections from both hosts would be helpful.

    "How I can tell the system to use CSV for storage communication"

Look at the role assigned to the CSV network. With Windows Server 2012 R2, any network to which you have assigned role 1 will be used for CSV. In the picture of your networks above, you show CSV as None. That means it has a role of 0, meaning no cluster communication or CSV traffic is going over that link. Cluster Network 2 and Live show as Cluster Only, or role 1. That means they will handle cluster communication and CSV traffic.

    Generally you should have your host management NIC set up for Cluster and Client (role 3).  CSV is configured as Cluster Only (role 1).  And Live Migration is configured as None (role 0), because you do not want any cluster communication on the live migration network.  It will work with a different role, but there is no need for it.

    To get the roles on the different NICs, you can issue this PowerShell cmdlet:

    Get-ClusterNetwork -Cluster <clustername> | FT Name, Role
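And, as a sketch, the roles can be set the same way (run on a cluster node; 0 = none, 1 = cluster only, 3 = cluster and client):

# Set each cluster network to the recommended role.
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 3
(Get-ClusterNetwork -Name "CSV").Role = 1
(Get-ClusterNetwork -Name "Live").Role = 0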

    Some of the links I posted earlier provide a fair amount of detail on this information.


    . : | : . : | : . tim


    Thursday, June 26, 2014 1:30 PM
PS C:\Users\manjeetb> Get-ClusterNetwork -Cluster cthcluster | FT Name, Role

Name                 Role
----                 ----
Cluster Network 2       3
CSV                     0
Live                    0

I followed this link to set up the network roles:
http://technet.microsoft.com/en-us/library/dn550728.aspx#BKMK_Isolate

As you can see in Network Connections, I also have the MGMT NIC, which is on the corp subnet as well, so when I RDP etc. to a server, is it using the MGMT or the cluster NIC (both are on the corp subnet)?




    Thursday, June 26, 2014 1:45 PM

Sorry, I changed some settings on the network after the post yesterday.


    • Edited by mburmi Thursday, June 26, 2014 1:55 PM
    Thursday, June 26, 2014 1:46 PM
Yes, that's good; live migration is now only on the Live network.

For the CSV traffic priority, paste these into PowerShell:

    ( Get-ClusterNetwork "CSV" ).Metric = 100

    ( Get-ClusterNetwork "Live" ).Metric = 200

    ( Get-ClusterNetwork "Cluster Network 2" ).Metric = 1000

This will make all CSV traffic use the CSV network first, fail over to Live, and then lastly Cluster Network 2 (your VM network)... we don't mind CSV failing over to the Live network, as we don't always have migrations happening, BUT we don't want Live to be the primary CSV traffic network, or else when migrations happen the cluster will crumble.

And you should now be good to go with some VM workload; not bad for your first cluster :) As for the APIPA address, as long as it's not advertised in WINS or DNS, don't worry about it; the hosts will use the DNS cluster entry for cluster communications.

    Thursday, June 26, 2014 11:26 PM
This is what I get when I run this command:

PS C:\Windows\system32> Get-ClusterNetwork | ft Name, Metric, AutoMetric

Name                 Metric   AutoMetric
----                 ------   ----------
Cluster Network 2     70386         True
CSV                   70385         True
Live                  70384         True

Should I still run the above-mentioned commands to change the metric?

    Thanks

    Friday, June 27, 2014 1:44 PM
CSV will use the network with the lowest metric. In your case, that is the Live network. You do not want that. Go into the cluster management console and set the CSV network to be available for cluster communication, to ensure it has role 1 assigned to it.

I think you misinterpreted the instructions from the TechNet article you posted. You interpreted the CSV network to be a storage network. It is not. iSCSI would be a storage network. SMB would be a storage network. The CSV network is more of a management network. Yes, IO can go over the CSV network, but it is not a storage network. You have to enable it for cluster communication because CSV traffic is a form of cluster communication.


    . : | : . : | : . tim

    Friday, June 27, 2014 6:05 PM
PS C:\Windows\system32> Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

Name                 Metric   AutoMetric   Role
----                 ------   ----------   ----
Cluster Network 2     70386         True      3
CSV                   30384         True      1
Live                  70384         True      0

Does it make sense now?

But Matt suggested running:

    ( Get-ClusterNetwork "CSV" ).Metric = 100
    ( Get-ClusterNetwork "Live" ).Metric = 200

    ( Get-ClusterNetwork "Cluster Network 2" ).Metric = 1000

Should I change the metric to these values manually, or can I leave it as it is (with the much higher metric values)?

I mean, what would the difference be?

    Thanks again.



    • Edited by mburmi Friday, June 27, 2014 6:43 PM
    Friday, June 27, 2014 6:40 PM
  • Hi

I would change them, but that's just me being pedantic. As per TechNet:

    "When the cluster sets the Metric value automatically, it uses increments of 100. For networks that do not have a default gateway setting (private networks), it sets the value to 1000 or greater. For networks that have a default gateway setting, it sets the value to 10000 or greater. Therefore, for your preferred CSV network, choose a value lower than 1000, and give it the lowest metric value of all your networks."

Therefore yours are set right... CSV, then Live, then VM network priority.

As for the Role settings, they stand for the following:

For the Role property, 1 represents a private cluster network and 3 represents a mixed cluster network (public plus private). So they are fine configured the way they are.

    Sunday, June 29, 2014 11:31 PM
Actually, I missed that your Live network role was 0; that means it is ignored by the cluster, and therefore the CSV traffic will use CSV and then your VM network.

So yes, run the PS commands.
    • Edited by Matt-Wyatt Sunday, June 29, 2014 11:48 PM
    Sunday, June 29, 2014 11:46 PM
  • You can optionally run the PowerShell commands - the values that the cluster assigned are basically doing the same thing the PowerShell commands would do.

    What is the status of your environment now that you have the cluster network, the CSV network, and live migration network configured as they should be?  Are things working properly?


    . : | : . : | : . tim

    Monday, June 30, 2014 10:45 PM
PS C:\Windows\system32> Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

Name                 Metric   AutoMetric   Role
----                 ------   ----------   ----
Cluster Network 2     70386         True      3
CSV                   30384         True      1
Live                  70384         True      0

I was off for a few days; mine is still the same, as I haven't run any commands yet.

    ( Get-ClusterNetwork "CSV" ).Metric = 100
    ( Get-ClusterNetwork "Live" ).Metric = 200

    ( Get-ClusterNetwork "Cluster Network 2" ).Metric = 1000

I was wondering: will the PS commands change just the metric, or will they change the role as well?

The metric is already fine on my setup; it's just the role for Live that is 0.

    thanks..

    Wednesday, July 2, 2014 2:03 PM
The PowerShell commands shown would just change the metric; that's the .Metric in the command. If you wanted to change the role, the command would have .Role in it, and only 0, 1, and 3 are valid values for the role.
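For example, a one-line sketch (role 1 is shown purely to illustrate the syntax):

# Change a cluster network's role: 0 = none, 1 = cluster only, 3 = cluster and client.
(Get-ClusterNetwork -Name "Live").Role = 1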

The specific numbers for the metric are not important. What is important is their relative position. The lowest metric should be assigned to the CSV network. The next lowest is live migration. There are other things that come into play, but for your environment, that is the most important part. If you want the specific details, see the links I posted earlier.


    . : | : . : | : . tim

    Wednesday, July 2, 2014 3:05 PM
Makes sense; so in my case the metric is already fine.

It's only the Role for Live that is 0, so do I need to change it to something?

    Thanks.

    Wednesday, July 2, 2014 3:24 PM
  • No, you do not NEED to change anything.  Actually, the cluster is quite flexible.  You can build a cluster on a single NIC and it will work just fine.  However, there are different things you may want to ensure when you have multiple NICs.  None of the following is NEEDED, but they are recommended.

    You should have at least two paths for cluster communication.  This is often handled by configuring a minimum of two networks, one for client and cluster communication and the other just for cluster communication.  This can also be accomplished by configuring a NIC team and just having it used for client and cluster communication.

On the live migration network, some people will allow cluster communications. That's fine. My preference is to disallow it, set up QoS on the live migration network, and have two or more other networks that allow cluster communication. Either way works. In your case, your live migration network with role 0 means that it is not allowing cluster communication on that NIC. But you have defined two other networks that do allow for cluster communication. You are covered.


    . : | : . : | : . tim

    • Marked as answer by mburmi Tuesday, July 15, 2014 6:37 PM
    Thursday, July 3, 2014 3:08 PM