Basic Failover Clustering question

  • Question

  • Hi,

    Consider the following Failover Cluster:

    • 2 cluster nodes running Windows Server 2008/2008 R2 with Hyper-V
    • The cluster quorum configuration is Node and Disk Majority
    • 2 cluster networks are configured: 1 for heartbeat and 1 for client communication (management)
    • 2 Hyper-V virtual machines are running as clustered resources, 1 VM running on each cluster node

    Now suppose both cluster networks fail while both nodes still have access to the disk witness (a SAN disk). What happens to the cluster's operational status? How does the cluster decide which node continues running the cluster resources (would 1 VM go offline and then come online on the surviving node)?

    Friday, November 11, 2011 7:14 PM

Answers

  • Okay.

    In your case, we have a total of three votes (1 node + 1 node + 1 witness disk).

    You have to know (and this is the key) that at any given time, the witness disk is owned by one of the nodes. (In the Failover Cluster Manager console, go to Storage and highlight the quorum disk; you will see the owner node name.)

    So when the heartbeat network is broken, the node that will keep hold of the cluster resources is the node that owns the witness disk.

    You also have to know that the witness disk is a highly available cluster resource. So when the node owning the witness disk loses connectivity to this disk, the disk will fail over to the other node, and that node becomes its new owner. (A short sketch of this voting arithmetic follows this post.)


    Regards, Samir Farhat Infrastructure Consultant
    • Proposed as answer by SAMIR FARHAT MVP Sunday, November 13, 2011 12:03 PM
    • Marked as answer by scripter42 Sunday, November 13, 2011 12:43 PM
    Sunday, November 13, 2011 12:23 AM
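    To make the voting arithmetic above concrete, here is a minimal Python sketch of the decision described in the marked answer. It is an illustration only: the node names are hypothetical, and a real cluster arbitrates ownership of the witness disk itself rather than consulting a simple variable.

    def surviving_partition(nodes, witness_owner):
        """Return the nodes that keep quorum after the cluster networks fail.

        In Node and Disk Majority, each node has 1 vote and the witness disk
        adds 1 vote to whichever node owns it. A node keeps the cluster
        running only if it can see a strict majority of all votes.
        """
        total_votes = len(nodes) + 1           # every node + the witness disk
        majority = total_votes // 2 + 1        # strict majority required

        survivors = []
        for node in nodes:
            # With the heartbeat broken, a node only counts its own vote plus
            # the witness vote if it owns (or wins arbitration for) the disk.
            votes = 1 + (1 if node == witness_owner else 0)
            if votes >= majority:
                survivors.append(node)
        return survivors

    # Hypothetical two-node cluster: NODE1 currently owns the witness disk.
    print(surviving_partition(["NODE1", "NODE2"], witness_owner="NODE1"))  # ['NODE1']

    With two nodes and one witness disk there are three votes, so the majority is two: only the node holding the witness disk can reach two votes, which is why that node survives the partition.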

All replies

  • Thanks for your reply. So in my scenario the node owning the disk witness will be the surviving node, and the VM residing on the other node will fail and be brought online on the surviving node. When the cluster network comes back online after some time, I suppose the cluster service on the failed node must be started manually?

    Sunday, November 13, 2011 9:35 AM
  • No, the failed node will keep listening for the other resources (node or disk), and when conditions allow, it will rejoin the cluster and be able to hold cluster resources again. But keep in mind that the failed-over VMs will not migrate back to this node automatically unless you have configured preferred owners or Virtual Machine Manager PRO. (A short sketch of this failback behaviour follows this reply.)


    Regards, Samir Farhat Infrastructure Consultant
    Sunday, November 13, 2011 12:01 PM
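    As a small illustration of that failback behaviour, here is a hedged Python sketch. The VM and node names and the preferred_owner field are hypothetical stand-ins for what you would configure through the group's preferred owners (or VMM PRO); on a real cluster, failback also has to be enabled for the group.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ClusteredVM:
        name: str
        current_node: str
        preferred_owner: Optional[str] = None   # e.g. set via Failover Cluster Manager

    def node_after_rejoin(vm: ClusteredVM, rejoined_node: str) -> str:
        """Where the VM runs once the failed node has rejoined the cluster."""
        if vm.preferred_owner == rejoined_node:
            return rejoined_node                # fails back to its preferred owner
        return vm.current_node                  # otherwise it stays where it is

    # Hypothetical VM that failed over to NODE1 while NODE2 was down.
    vm = ClusteredVM("VM2", current_node="NODE1")
    print(node_after_rejoin(vm, "NODE2"))       # NODE1 -> no automatic move back
    vm.preferred_owner = "NODE2"
    print(node_after_rejoin(vm, "NODE2"))       # NODE2 -> fails back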
  • I see, thanks for your assistance.
    Any books or other good literature you would recommend regarding Failover Clustering in general? E.g. to understand the details of the Resource Hosting Subsystem and the scenarios where it might be appropriate to manually adjust the resource failover policies, and so on.
    Sunday, November 13, 2011 12:43 PM
  • Have a look at this link; you will find all your answers there:

    http://social.technet.microsoft.com/wiki/contents/articles/125.aspx#Planning


    Regards, Samir Farhat Infrastructure Consultant
    Sunday, November 13, 2011 12:48 PM
  • Thanks!
    Sunday, November 13, 2011 12:58 PM