This is a baseline document on how to configure a Hyper-V 3.0 cluster on Windows Server 2012 / Windows Server 2012 R2.

A few points before we start
  • Make sure the server firmware is fully updated to the latest available version. This is very important: my first installation ran into issues that pointed to outdated firmware.
  • The hardware I used was an HP ProLiant DL580 G7, and I did the firmware updates before starting the Windows Server 2012 installation. One major reason is that the standalone firmware installers available on the HP download site will not run on Windows Server 2012; they pop up the error "The software is not supported for installation on this system. The OS is not supported". If there is an option to update firmware from a bootable CD, it should work irrespective of the OS version. To be on the safe side, do the firmware updates prior to installing the Windows Server 2012 OS. HP DL580 G7 drivers/firmware for Windows Server 2012 are now available on the HP website. Please ensure all drivers and firmware are updated before starting the Hyper-V configuration.
  • A Brocade 425/825 4G/8G FC HBA was used to connect the servers to the SAN storage. Though the Windows Server 2012 installation includes drivers for Brocade, the latest version can be downloaded from the Brocade website and installed as an update.
  • I was not able to upgrade the NIC drivers, but the default drivers that ship with Windows Server 2012 seem to be good. With the OS, 2012 Q1-Q2 drivers are what I see on my server for the Broadcom BCM5708S as well as the NC375i.

Configuration
        
  Once the OS installation is done, here is the sequence I followed.      

  1. On each server, configured the NICs
    • Configured 1 NIC for Management with a dedicated IP from the management VLAN
      • IP address, default gateway, DNS
    • Configured 1 NIC for Cluster Shared Volume with a dedicated IP from the CSV VLAN
      • IP address and subnet only
    • Configured 1 NIC for Heartbeat with a dedicated private IP
      • IP address and subnet only
  2. Disabled NetBIOS over TCP/IP on all interfaces except the Management NIC.
  3. If IPv6 is not used, disable IPv6 on each NIC.
  4. Unchecked DNS registration - "Register this connection's addresses in DNS" - for all NICs except the Management NIC.
  5. Renamed each network card according to its role - I used (Management, Data-1, Data-2, Data-3, Heartbeat and CSV). Labeling will help to select the right interfaces while configuring NIC teaming.
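Steps 1-5 can also be scripted. Here is a minimal sketch, assuming the adapter labels from step 5 and example addresses (replace them with values from your own VLANs); note that NetBIOS over TCP/IP is only exposed through WMI on Windows Server 2012:

```powershell
# Example addresses - replace with your own management/CSV/heartbeat ranges.
New-NetIPAddress -InterfaceAlias "Management" -IPAddress 10.0.1.11 -PrefixLength 24 -DefaultGateway 10.0.1.1
Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 10.0.1.5
New-NetIPAddress -InterfaceAlias "CSV"       -IPAddress 10.0.2.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Heartbeat" -IPAddress 10.0.3.11 -PrefixLength 24

# Rename an adapter to match its role (repeat per adapter).
Rename-NetAdapter -Name "Ethernet" -NewName "Management"

foreach ($nic in "CSV","Heartbeat") {
    # Disable IPv6 binding and DNS registration on the non-management NICs.
    Disable-NetAdapterBinding -Name $nic -ComponentID ms_tcpip6
    Set-DnsClient -InterfaceAlias $nic -RegisterThisConnectionsAddress $false
}

# NetBIOS over TCP/IP is set via WMI: 2 = disable NetBIOS.
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = true" |
    Where-Object { $_.IPAddress -notcontains "10.0.1.11" } |   # skip the Management NIC
    ForEach-Object { $_.SetTcpipNetbios(2) | Out-Null }
```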
  6. I had three NICs available for Hyper-V virtual machines and hence configured a team.
  7. Windows Server 2012 comes with a built-in NIC teaming feature.
    • To configure teaming, go to Server Manager -> Local Server -> NIC Teaming
    • If teaming is not enabled, click on "Disabled" so that a new window pops up for configuring teaming
    • Team creation is very simple. Click on the "Tasks" drop-down menu and select "New Team"
    • Choose the adapters you want to include in the team
    • Click on additional properties
    • Choose the Teaming mode - "Switch Independent" is recommended
    • Choose the Load balancing mode - "Hyper-V Port" is recommended if the host OS is Windows Server 2012
    • Choose the Load balancing mode - "Dynamic" is recommended if the host OS is Windows Server 2012 R2
    • Refer http://social.technet.microsoft.com/wiki/contents/articles/14131.windows-2012-server-nic-teaming-for-hyperv.aspx if you are trying out "Switch independent mode" with Address Hash load balancing mode.
    • Click on OK
    • A new NIC with the team name will be listed along with the other interfaces
    • Configure the Team NIC with an IP Address which will be used for HyperV virtual machines.
      • IP Address and subnet only
    • I used LACP as the Teaming mode and Address Hash as the Load balancing mode
  8. Disabled NetBIOS over TCP/IP on the Team NIC.
  9. Unchecked DNS registration - "Register this connection's addresses in DNS" - on the Team NIC.
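If you prefer PowerShell, the team above can be created in one command. A sketch with a hypothetical team name and the adapter labels from step 5; note that the GUI's "Address Hash" corresponds to TransportPorts in the cmdlet:

```powershell
# "VM-Team" is a hypothetical name; Data-1..Data-3 are the labels from step 5.
New-NetLbfoTeam -Name "VM-Team" -TeamMembers "Data-1","Data-2","Data-3" `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

# Give the team interface its IP (example address; IP and subnet only).
New-NetIPAddress -InterfaceAlias "VM-Team" -IPAddress 10.0.4.11 -PrefixLength 24
```

Use `-TeamingMode SwitchIndependent` and `-LoadBalancingAlgorithm HyperVPort` (or `Dynamic` on R2) if you follow the recommended modes instead.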
  10. Configure Priority for the Network Adapters
    • Control Panel\Network and Internet\Network Connections
    • Press the ALT key -> Select Advanced from the menu
    • Click on Advanced settings
    • Arrange the Network Adapters as below
      • Management - Top of the list
      • Heartbeat  - Second on the list
      • CSV - Third on the list
      • Team Interface - Last on the list
  11. Allow ICMP on Windows firewall on all nodes.
  12. From one server, ping the Management IP, Heartbeat IP, CSV IP and Team IP of all other servers and ensure that communication is fine within the configured networks.
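A quick way to run the reachability check from one node, assuming example IPs for the other servers:

```powershell
# Hypothetical addresses - list the Management/Heartbeat/CSV/Team IPs of the other nodes.
$targets = "10.0.1.12","10.0.2.12","10.0.3.12","10.0.4.12"
foreach ($ip in $targets) {
    if (Test-Connection -ComputerName $ip -Count 2 -Quiet) {
        Write-Host "$ip reachable"
    } else {
        Write-Warning "$ip NOT reachable"
    }
}
```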
  13. Install MPIO or Powerpath or any other multipathing utility. I have tested MPIO as well as Powerpath 5.5 (build 289).
  14. Go to Computer Management -> Disk Management.
  15. Ensure that the disks from the SAN are listed properly.
  16. Bring the disk online.
  17. Initialize the disk.
  18. Format the disk as NTFS without assigning a drive letter if you are planning to configure it as a CSV disk.
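Steps 14-18 map to a few storage cmdlets on Windows Server 2012. A sketch assuming the SAN LUN appears as disk number 1 (check `Get-Disk` first); no drive letter is assigned, as required for a CSV disk:

```powershell
# Verify disk numbering before running this - "1" is an assumption.
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1" -Confirm:$false
```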
  19. Install Failover Clustering feature on all nodes
    • Make use of the new server management dash board to do this remotely
    • From Server Management -> Dash Board, Select the 4th Option - Create a server group
    • Name the group and Click on DNS
    • Type the name of each server to be added in this group and click on search
    • If the search was successful, the server will be listed down
    • Select the server from the result and click on the > button to add the computer to the group
    • Add each server to the list one by one and finally click on OK
    • Now you will see this group in Server Manager
  20. Select the new Group created.
  21. Manage -> Add Roles and Features -> Role based features installation -> Select the server from the pool -> Click on the server name one at a time -> Next.
  22. From Features selection page, Select Failover Clustering -> Next.
  23. For Failover Clustering feature addition, reboot is not required. So just click on Install.
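The feature installation in steps 19-23 can also be pushed to all nodes from one console. A sketch with assumed node names (only MSHVCLU1N1 appears later in this document; the others are placeholders):

```powershell
# Placeholder node names - substitute your own.
$nodes = "MSHVCLU1N1","MSHVCLU1N2","MSHVCLU1N3"
foreach ($node in $nodes) {
    # No reboot is needed for the Failover Clustering feature.
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
}
```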
  24. Once installation done on all nodes, Open Failover Cluster Manager MMC.
  25. Right Click on Failover Cluster Manager and select Create Cluster.
  26. Select the servers you want to add to this cluster.
  27. Run the Validation.
  28. On the first validation test, I noticed errors and warnings. View the report and go through the errors first.
    • The error listed from all nodes was
      • Validate Disk Failover

          Description: Validate that a disk can fail over successfully with data intact.
          Start: 8/20/2012 9:26:21 PM.
          Node MSHVCLU1N1.insideactivedirectory.com holds the SCSI PR on Test Disk 0 and brought the disk online, but failed in its attempt to write file data to partition table entry 1. The disk structure is corrupted and unreadable.
    • From the TechNet forum, I got a suggestion to reformat the disk, which fixed the issue for me
    • Reran the validation, this time selecting only the Storage tests, and confirmed that the issue was no longer present
    • Another warning which came up on the first validation was for digital signatures
      • Validate All Drivers Signed

          Description: Validate that tested servers contain only signed drivers.
          Validating that the servers contain only signed drivers...
          The node 'MSHVCLU1N1.insideactivedirectory.com ' has unsigned drivers.
    • The unsigned driver was the Brocade HBA driver which I had installed, so I was confident ignoring this warning
  29. Make sure that you fix all errors and rerun the validation to confirm they are really fixed before proceeding to cluster creation.
  30. In the next step, give the cluster name. If your account doesn't have rights to create computer objects, you can use a prestaged object with the appropriate security permissions assigned prior to cluster creation.
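Validation and cluster creation (steps 25-30) can be done from PowerShell as well. A sketch with assumed node names, cluster name and management IP - adjust all of them to your environment:

```powershell
# Node names, cluster name and IP below are placeholders.
$nodes = "MSHVCLU1N1","MSHVCLU1N2","MSHVCLU1N3"
Test-Cluster -Node $nodes                        # runs the full validation and writes a report
New-Cluster -Name "MSHVCLU1" -Node $nodes -StaticAddress "10.0.1.100"
```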
  31. Once cluster is created, Click on Networks from Failover Cluster Manager.
  32. Network will be grouped based on subnets.
  33. Rename the network groups to make them easier to identify.
  34. For the Management network group - enable "Allow cluster network communication on this network". This will act as a secondary connection for the Heartbeat.
  35. For the Heartbeat network group - enable "Allow cluster network communication on this network".
  36. For the Heartbeat network group - disable "Allow clients to connect through this network".
  37. For the Live Migration network configuration, go to Failover Cluster Manager -> Cluster Name -> Networks. Right-click on "Networks" and choose Live Migration Settings. Select the networks which should be used for Live Migration.
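The network roles in steps 34-36 can also be set from PowerShell. A sketch assuming the group names "Management" and "Heartbeat" from step 33:

```powershell
# Cluster network roles: 3 = cluster and client, 1 = cluster only, 0 = none.
# "Management" and "Heartbeat" are the renamed groups assumed from step 33.
(Get-ClusterNetwork -Name "Management").Role = 3
(Get-ClusterNetwork -Name "Heartbeat").Role  = 1
```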
  38. Click on Failover Cluster Manager -> Storage -> Disks.
  39. Confirm that the SAN disk is already listed.
  40. Right click on the disk name and select "Add to Cluster Shared Volume".
  41. Go to Computer Management -> Disk Management.
  42. Ensure that the SAN disk is now showing as "Reserved".
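Steps 38-40 have a PowerShell equivalent too. A sketch assuming the default disk name that Failover Clustering assigns:

```powershell
# Add any disks not yet in the cluster, then promote one to a Cluster Shared Volume.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"   # default name assumed - check yours first
```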
  43. Now it's time to install the Hyper-V role on each node.
  44. Manage -> Add Roles and Features -> Role based features installation -> Select the server from the pool -> Click on the server name one at a time -> Next.
  45. On the Server Role selection page, Select Hyper-V and proceed with installation.
  46. Select the Team Network interface which will be used for the HyperV Virtual Switch.
  47. A reboot is required for the Hyper-V role installation, so it's better to select automatic reboot along with the installation.
  48. Configure Priority for the Network Adapters as mentioned on Step 10 with vEthernet (Microsoft Network Adapter Multiplexor Driver - Virtual Switch) as the least priority.
  49. Once the Hyper-V role is installed, on each server set the default location of VHDs and virtual machines to C:\ClusterStorage\Volume (the CSV volume).
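Steps 43-49 can be sketched in PowerShell as below. The switch name is hypothetical, "VM-Team" assumes the team from step 7, and "Volume1" is an assumed CSV folder name - check C:\ClusterStorage on your nodes:

```powershell
# Run on each node; -Restart reboots the server automatically.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot, bind the virtual switch to the team interface.
New-VMSwitch -Name "VM-Switch" -NetAdapterName "VM-Team" -AllowManagementOS $true

# Point the default VM/VHD locations at the CSV (folder name assumed).
Set-VMHost -VirtualHardDiskPath "C:\ClusterStorage\Volume1" `
           -VirtualMachinePath "C:\ClusterStorage\Volume1"
```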
  50. Create a Virtual Machine from any of the node.
  51. After the installation, go to Failover Cluster Manager -> Roles.
  52. Right Click and Select Configure Role.
  53. Select Virtual Machine as the role and click Next.
  54. Select the Virtual Machines which require high availability and proceed with confirmation.
  55. Test Live Migration and Quick Migration between the nodes.
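Steps 52-55 can also be done from PowerShell. A sketch with a hypothetical VM name and an assumed destination node:

```powershell
# "TestVM" and the node name are placeholders - use your own VM and node names.
Add-ClusterVirtualMachineRole -VirtualMachine "TestVM"
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "MSHVCLU1N2" -MigrationType Live
```

Change `-MigrationType` to `Quick` to exercise Quick Migration as well.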
  56. My 3-node Hyper-V cluster works fine with the above configuration.

I am sure this is only a baseline document. Please amend it as required.

Good luck
Shaba
insidevirtualization.com