Unable to create 2-Node Cluster - Timeout Server 2012

    Question

  • Hello Everyone,

    I have two Windows Server 2012 servers that I'm trying to set up clustering with. Individually, each can create its own one-node cluster with no problem, but adding a node to an existing cluster doesn't work. If I destroy one cluster and try to add that node to the other existing cluster, it fails with a timeout. When I run the Validate a Configuration Wizard, it says everything checks out successfully (warnings on Storage and Network for minor items), then it fails once it tries to create the cluster with the following:

    Beginning to configure the cluster SERVICES.
    Initializing Cluster SERVICES.
    Validating cluster state on node SERVER1.
    Searching the domain for computer object 'SERVICES'.
    Creating a new computer account (object) for 'SERVICES' in the domain.
    Configuring computer object 'SERVICES in organizational unit OU=Servers,DC=xxxxxx,DC=xxxxx' as cluster name object.
    Validating installation of the Network FT Driver on node SERVER1.
    Validating installation of the Cluster Disk Driver on node SERVER1.
    Configuring Cluster Service on node SERVER1.
    Validating installation of the Network FT Driver on node SERVER2.
    Validating installation of the Cluster Disk Driver on node SERVER2.
    Configuring Cluster Service on node SERVER2.
    Waiting for notification that Cluster service on node SERVER2 has started.
    Forming cluster 'SERVICES'.
    Unable to successfully cleanup.
    An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
    An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
    There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes.
    There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes.
    An error occurred while creating the cluster.
    An error occurred creating cluster 'SERVICES'.

    This operation returned because the timeout period expired
    To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.

    Tried going into Active Directory Users and Computers, then giving full access under Security to the computer accounts by prestaging a cluster name as well. That didn't seem to help, even though it enabled the disabled computer account when forming the cluster. I can see that if I try to join one node to an existing single-node cluster, it joins but never comes "Up", then gets evicted after a timeout.

    Suggestions on where to go next would be much appreciated. It's interesting to me, since I've set up multiple clusters on 2008/R2 and am currently running clustered Hyper-V 2012 servers as the hosts for these without any issues, so right now two clusters are in the environment without any problems. I started a debug log during the creation of the failover cluster but didn't see anything that caught my attention as useful information. Not quite sure where to go next.


    • Edited by Jay Kriv Saturday, January 5, 2013 4:54 AM
    Saturday, January 5, 2013 4:53 AM

Answers

  • Steve, don't add any additional parameter to the command, as it should be applied to both IPv4 and IPv6.

    On the Hyper-V host machines, run the following PowerShell:

    Get-NetAdapter -Name XXXX | Disable-NetAdapterChecksumOffload

    Where XXXX is the name of each physical NIC that is part of the team, and of the teamed NIC itself (this is the teamed NIC that binds to the virtual switch to which the VMs are connected).

    Alternatively, we can edit the NIC properties to turn off checksum offload on each adapter involved.

    Restart the machine for the change to take effect.
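
    For example, a rough PowerShell sketch of applying this to every adapter involved and then checking the result (the adapter names "NIC1", "NIC2" and "Team1" are placeholders for your team members and teamed NIC):

    # disable all checksum offloads on each team member NIC and on the teamed NIC itself
    foreach ($nic in "NIC1", "NIC2", "Team1") {
        Get-NetAdapter -Name $nic | Disable-NetAdapterChecksumOffload
    }

    # confirm the offload settings are now disabled
    Get-NetAdapterChecksumOffload -Name "NIC1", "NIC2", "Team1" |
        Format-Table Name, IpIPv4Enabled, TcpIPv4Enabled, TcpIPv6Enabled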

    Monday, May 13, 2013 2:28 AM

All replies

  • **Update**

    Tried a full format and reinstall of Windows Server 2012 Standard from DVD, and added only the Failover Clustering feature. Joined to the domain under different names (SERVER3 and SERVER4), ran the Validation Wizard, which was successful, and got the same exact failure trying to create the SERVICES cluster. Tried a new cluster name, "SERVICE"; that failed as well because of a timeout.


    Saturday, January 5, 2013 7:12 PM
  • Try creating the same cluster using a single node instead of both and see if that works. If so, I'd recommend reviewing the following for further tips:

    http://blogs.technet.com/b/askcore/archive/2011/05/31/cluster-installation-time-out-issues.aspx
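
    If you want to try this from PowerShell rather than the wizard, a minimal sketch might look like this (the static address below is a placeholder; the names come from the thread above):

    Import-Module FailoverClusters

    # create the cluster with only the first node
    New-Cluster -Name SERVICES -Node SERVER1 -StaticAddress 192.168.1.50

    # if that succeeds, try joining the second node on its own
    Add-ClusterNode -Cluster SERVICES -Name SERVER2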


    Visit my blog about multi-site clustering

    • Marked as answer by Lawrence,Moderator Wednesday, January 16, 2013 8:01 AM
    • Unmarked as answer by Jay Kriv Wednesday, January 16, 2013 8:53 PM
    Sunday, January 6, 2013 12:11 AM
    Moderator
  • I am able to create single node clusters fine. No duplicate name on network. Did see "graceful" in the log, but there is no Antivirus installed and Windows Firewall is disabled.
    Tuesday, January 8, 2013 4:09 AM
  •  I use the Validate a Configuration Wizard and it says everything checks out successfully (warning on Storage and Network for minor stuff),

    Maybe it is not so minor.  What are the warnings?  Do you have the latest 2012 drivers for storage and networking?  What sort of storage are you using?  Configuration?  Are the hosts connected to the same network switch?

    No specific questions; just trying to get an idea of the configuration to see if anything jumps out as 'different'.


    tim

    Wednesday, January 9, 2013 2:31 PM
  • You might need to engage PSS to troubleshoot this further.

    Visit my blog about multi-site clustering

    Thursday, January 10, 2013 3:11 AM
    Moderator
     I use the Validate a Configuration Wizard and it says everything checks out successfully (warning on Storage and Network for minor stuff),

    Maybe it is not so minor.  What are the warnings?  Do you have the latest 2012 drivers for storage and networking?  What sort of storage are you using?  Configuration?  Are the hosts connected to the same network switch?

    No specific questions; just trying to get an idea of the configuration to see if anything jumps out as 'different'.


    tim

    Hi Tim,

    I will rerun the Validate Wizard again and post results tomorrow; I have to remove the nodes from their current individual single-node clusters first.

    I have the latest drivers, as far as I know, for the Hyper-V guest hardware through Microsoft. I have not yet attempted to use central storage, to rule out issues with the OpenNas I use for the current Hyper-V 2012 and Windows 2008 R2 clusters. The Hyper-V hosts are connected to the same switch.

    Currently I have a 2-node cluster of Hyper-V 2012 servers with a couple of HA virtual machines; on each of those Hyper-V nodes is one of the virtuals I'm trying to cluster together. One virtual on each node already has a 2008 R2 server clustered.

    Thursday, January 10, 2013 6:28 AM
  • Currently I have a 2-node cluster of Hyper-V 2012 servers with a couple of HA virtual machines; on each of those Hyper-V nodes is one of the virtuals I'm trying to cluster together. One virtual on each node already has a 2008 R2 server clustered.

    So you currently have created a physical cluster with two Microsoft Hyper-V Server hosts and that is working fine.  The place you are having problems is with clustering a pair of VMs, VMs that reside on different nodes of the physical cluster?

    .:|:.:|:. tim

    Thursday, January 10, 2013 7:41 PM
  • Yes, I do have one physical cluster working; HV1 and HV2 make up VirtualCluster.

    Yes, HV1 has a virtual Server1 and HV2 has a virtual Server2. They are not clustering together properly.

    HV1 also has 2k8Server1, and HV2 has 2k8Server2, and those cluster together properly.


    • Edited by Jay Kriv Wednesday, January 16, 2013 8:55 PM
    Wednesday, January 16, 2013 8:54 PM
  • Hi Jay,

    If I've read your post correctly, we're having the same issue.

    I have 2 physical boxes running Windows Server 2012 Datacenter, configured with node and disk majority.

    On this cluster I have the hyper-v role installed, and I have created 2 VMs, one hosted on each physical node.

    I then tried to create a cluster using the two VMs, and hit this problem.

    I haven't found the solution, and there are no errors in the logs, it just times out.

    I do seem to have worked around the issue though: move one VM to physical node 1 (so that both VMs are on the same node), create the cluster (which completes successfully this time), and then move the VM back to its original physical node.

    I'm sure someone will give a more technical explanation to this problem at some point, but in the meantime, I hope that helps.

    Marc

    Friday, January 18, 2013 7:46 PM
  • Hi Jay,

    If I've read your post correctly, we're having the same issue.

    I have 2 physical boxes running Windows Server 2012 Datacenter, configured with node and disk majority.

    On this cluster I have the hyper-v role installed, and I have created 2 VMs, one hosted on each physical node.

    I then tried to create a cluster using the two VMs, and hit this problem.

    I haven't found the solution, and there are no errors in the logs, it just times out.

    I do seem to have worked around the issue though: move one VM to physical node 1 (so that both VMs are on the same node), create the cluster (which completes successfully this time), and then move the VM back to its original physical node.

    I'm sure someone will give a more technical explanation to this problem at some point, but in the meantime, I hope that helps.

    Marc


    Spoke too soon... moving the VM back to the other node has killed the VM cluster... I'll keep trying! :)
    Friday, January 18, 2013 7:47 PM
  • Are you using only external network switches on the VMs, or do you have some internal virtual switches?

    .:|:.:|:. tim

    Saturday, January 19, 2013 7:52 PM
  • Hi Jay,

    If I've read your post correctly, we're having the same issue.

    I have 2 physical boxes running Windows Server 2012 Datacenter, configured with node and disk majority.

    On this cluster I have the hyper-v role installed, and I have created 2 VMs, one hosted on each physical node.

    I then tried to create a cluster using the two VMs, and hit this problem.

    I haven't found the solution, and there are no errors in the logs, it just times out.

    I do seem to have worked around the issue though: move one VM to physical node 1 (so that both VMs are on the same node), create the cluster (which completes successfully this time), and then move the VM back to its original physical node.

    I'm sure someone will give a more technical explanation to this problem at some point, but in the meantime, I hope that helps.

    Marc


    Spoke too soon... moving the VM back to the other node has killed the VM cluster... I'll keep trying! :)

    Hi Marc,
    I got the same behavior.

    Tested creating a cluster while SERVER2 was on HV02: it failed.
    Migrated SERVER2 to HV01 (which currently has SERVER1).
    Tested creating a cluster while SERVER2 was on HV01 with SERVER1: creation finally successful! YAY... Great tip, thanks for the info.

    A cluster on the same physical server is of no use to me, though, so I used Live Migration and moved the SERVER2 machine back to HV02. The cluster breaks and shows SERVER1 node status Down and SERVER2 node status Up.


    • Edited by Jay Kriv Sunday, January 20, 2013 6:31 PM
    Sunday, January 20, 2013 6:30 PM
  • Are you using only external network switches on the VMs, or do you have some internal virtual switches?

    .:|:.:|:. tim

    Hi Tim,
    Both physical servers are plugged into the same physical switch, with no special routing or virtual switches set up. Currently both HV01 and HV02 are set up with one single "External Network", which seems to work for the 2008 R2 cluster with no problem. Other services like DFS replication between the virtuals don't have any issues with communication.

    Right now, since I had the cluster working and then moving to HV02 broke it, I'm migrating it back to see if it magically fixes itself once it's on the same physical server again.




    • Edited by Jay Kriv Sunday, January 20, 2013 6:36 PM
    Sunday, January 20, 2013 6:31 PM
  • Jay,

    Are you sure you have a guest cluster running Windows 2008 R2?  As in, you're trying to achieve exactly the same thing using 2012?

    The only reason I ask is that I gave up on 2012, tried using 2008 R2, and I'm hitting exactly the same issues. I'm now wondering whether guest clustering across different hosts is supported at all. Anyone else have any experience of this? Certainly guest clustering on the same host works in both 2008 R2 AND 2012.

    Kind regards

    Marc

    Monday, January 21, 2013 1:23 PM
  • Jay,

    Are you sure you have a guest cluster running Windows 2008 R2?  As in, you're trying to achieve exactly the same thing using 2012?

    The only reason I ask is that I gave up on 2012, tried using 2008 R2, and I'm hitting exactly the same issues. I'm now wondering whether guest clustering across different hosts is supported at all. Anyone else have any experience of this? Certainly guest clustering on the same host works in both 2008 R2 AND 2012.

    Kind regards

    Marc

    Jay,

    I also just noticed that the person who configured the DAG in our Exchange 2010 environment has both mailbox servers on the same VMware host, and there is a DRS rule in place to keep both VMs on the same host, so that's more evidence for this not being possible (or at least, I'm assuming that person hit the same issue).

    Aside from the fact that you've said you've got it to work in 2008 R2, I understand your concerns about having both clustered VMs on the same host. It was the same reason I was looking into this, but to be honest, if the underlying physical cluster fails, they'll both fail over to the other physical node anyway?

    Please accept my apologies for the 'guess-work' but I'm only just starting to get involved in clustering :)

    Kind regards

    Marc

    Monday, January 21, 2013 1:47 PM
  • Hey Marc,

    Yes, I just looked again and they are working, but the difference with the R2 servers is that they were migrated from the VMware hypervisor while already in a cluster with a quorum disk. Not sure if that makes a difference, but they seem to work right now.

    Thursday, January 24, 2013 3:11 PM
  • I just want to recap the variables:

    • Timeout when running Create Cluster
    • Creating a Guest Cluster (a cluster between VMs)
    • Host OS is Windows Server 2012
    • Fails when VMs are on different hosts
    • Succeeds when VMs are on same host

    One additional question...  Are you using the in-box NIC Teaming on the hosts?

    Also be sure to see this blog; Step #3 covers turning on verbose debug logging to collect more information:
    http://blogs.msdn.com/b/clustering/archive/2012/05/07/10301709.aspx
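
    Once verbose logging is enabled and the failure is reproduced, a rough sketch of pulling the cluster debug logs from both nodes into one folder for review (the destination path and time span are placeholders):

    # generate cluster.log files covering roughly the last 30 minutes and copy them to C:\ClusterLogs
    Get-ClusterLog -Node SERVER1, SERVER2 -TimeSpan 30 -Destination C:\ClusterLogs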

    Thanks!
    Elden

    • Proposed as answer by jimbiddle Friday, February 1, 2013 6:37 PM
    • Unproposed as answer by jimbiddle Friday, February 1, 2013 6:37 PM
    Sunday, January 27, 2013 4:13 AM
    Owner
  • Guys, I just received confirmation from MS via email and via our MS technical rep. Nesting a cluster with Hyper-V virtual machines will only work if the VMs are on the same host; if you separate them, the cluster will break. Below is the email from MS:

    Hi Jim/Matthew,

    As discussed earlier, nested clusters within Hyper-V are not supported. It is still under testing. There are no documents as such that tell us that nested clusters within Hyper-V are unsupported and that the guest VMs that are part of the SQL cluster need to reside on the same host or the cluster will break. I will confirm the same with our seniors and get back to you. Will keep you posted on this.

    Thanks and Regards,

    James Biddle


    Friday, February 1, 2013 6:39 PM
  • Jimbiddle, it appears from the previous email that was sent to you that your rep was going to get confirmation on whether or not nested clusters are supported, but in the second email thread it sounds a bit more like they are continuing to document the issue for troubleshooting. Did you get final confirmation on this regarding the nested configuration?

    I am in the process of creating a multi-site failover cluster and have seen these exact symptoms. I attempted to create the cluster on separate hosts that belong to the same 2012 Hyper-V cluster, but it failed with the same errors. When the VMs were brought together, the cluster was created with no issues. I am wondering if this is more of a Hyper-V Virtual Switch issue and less of a nested failover cluster issue. Has anyone attempted these procedures on two separate Hyper-V hosts that are not part of the same cluster? It seems peculiar that Microsoft's response would be along the lines of unsupported nested clusters when there are no docs indicating that it is not supported.

    Saturday, February 2, 2013 4:46 PM
  • Elden, would you be able to confirm jimbiddle's comment from 2/1 stating that nested clusters are not currently supported? I currently have a Partner Break/Fix case open regarding this same issue, as it is a crucial piece for a POC that I am involved in.
    Tuesday, February 5, 2013 2:02 PM
  • James is incorrect, it is supported.  If you are having issues creating a Guest Cluster, please open a support case with CSS and they will assist you.

    Thanks!
    Elden

    Thursday, February 21, 2013 4:00 PM
    Owner
  • Guys, I just received confirmation from MS via email and via our MS technical rep. Nesting a cluster with Hyper-V virtual machines will only work if the VMs are on the same host; if you separate them, the cluster will break. Below is the email from MS:

    Hi Jim/Matthew,

    As discussed earlier, nested clusters within Hyper-V are not supported. It is still under testing. There are no documents as such that tell us that nested clusters within Hyper-V are unsupported and that the guest VMs that are part of the SQL cluster need to reside on the same host or the cluster will break. I will confirm the same with our seniors and get back to you. Will keep you posted on this.

    Thanks and Regards,

    James Biddle


    We were told the EXACT same thing last night by MS PSS.
    Wednesday, March 6, 2013 5:11 PM
  • As I said before, Guest Clusters are fully supported on Win2012.  Just because you are having a problem, does NOT mean it is unsupported.  Please work with Microsoft support and they will be able to assist with why things are not working correctly.

    Thanks!
    Elden

    Thursday, March 7, 2013 1:38 AM
    Owner
  • As I said before, Guest Clusters are fully supported on Win2012.  Just because you are having a problem, does NOT mean it is unsupported.  Please work with Microsoft support and they will be able to assist with why things are not working correctly.

    Thanks!
    Elden

    I'm not sure that anything has changed except Windows Updates on my servers, but all of a sudden the cluster just started to work normally. I went to move the two guest VMs onto one server, since I was going to do a monthly backup to a USB drive, and then I noticed the cluster stayed online after moving just one. Did some reboot tests to make sure they'd rejoin automatically, and it worked. The failover cluster is successfully working using two separate physical hosts, one guest on each.

    I have not been doing any additional work on getting the cluster to work with the VMs on different hosts, and it was by accident that I realized it was actually working again.




    • Edited by Jay Kriv Thursday, March 7, 2013 6:48 AM
    Thursday, March 7, 2013 6:46 AM
  • Hi,

    I had the same problem for the creation of a Windows 2012 cluster. Timeout during creation...

    For me, the problem was solved after opening TCP port 464 between the cluster nodes and the DCs (it is used to set/change the Kerberos password).
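
    A quick way to check this from a cluster node, as a rough sketch (DC01 is a placeholder for one of your domain controllers; Test-NetConnection requires PowerShell 4.0 or later):

    # verify the kpasswd port (TCP 464) is reachable from the node to the DC
    Test-NetConnection -ComputerName DC01 -Port 464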

    Good luck !

    Regards,

    Benjobabe 

    • Proposed as answer by Benjobabe Friday, March 8, 2013 1:07 PM
    Friday, March 8, 2013 11:09 AM
  • Just wanted to throw my hat in and say that up until this thread I had been unsuccessful in creating a nested cluster with 2012 Datacenter on top of a Hyper-V cluster.  Both server nodes were on different Hyper-V hosts, and after reading this thread and moving the two VMs onto the same host, cluster creation was successful.

    brad

    Tuesday, March 12, 2013 4:57 PM
  • Hi,

    I was able to solve it.

    It's not a fresh topic, but I faced the same problem. I also had a single network adapter and was connected to iSCSI storage. In my case the problem was that I used the same NIC for iSCSI and for cluster communication (as I have one NIC per host). I disconnected the storage and created the cluster without storage, and that worked. At that point I had a cluster without storage, so I reconnected the storage once the cluster existed, and that did the trick.
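
    A rough sketch of the same approach in PowerShell (the cluster name, node names and IP address are placeholders):

    # create the cluster without claiming any storage
    New-Cluster -Name SERVICES -Node SERVER1, SERVER2 -StaticAddress 192.168.1.50 -NoStorage

    # after reconnecting the iSCSI storage on both nodes, add the disks to the cluster
    Get-ClusterAvailableDisk | Add-ClusterDisk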

    Dvijne




    • Edited by Dvijne Thursday, March 21, 2013 6:12 PM
    Thursday, March 21, 2013 6:09 PM
  • Microsoft confirmed that I have figured out the answer if you are having an issue where your nested cluster breaks when you move a VM guest to the other host (below).

    A nested cluster heartbeat network has to be set up.

    Meaning not only does the nested heartbeat have to communicate between its nodes (the classic heartbeat situation), but the heartbeat also has to communicate with each host, or communication is lost.

    How to:
    • Present a physical NIC to both hosts.
    • Create a new virtual switch on both hosts pointing to the new NICs (this gives you virtual NICs to add to your VM guests). This also creates a virtual NIC on each host, because the physical NIC is being presented to Hyper-V.
    • Add NICs to both guests in the nested cluster.
    • Set IPs on all four NICs (hosts and guests), creating a subnetted network between all four (ping to make sure you can communicate all around).

    Now you will be able to fail one member of your nested cluster over to the other host without breaking the nested cluster. Communication with the host that does not own the guest is the key. A PowerShell sketch of these steps follows below.
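
    A hedged PowerShell sketch of those steps, run on each Hyper-V host (the switch name, NIC name, VM name and addresses are placeholders):

    # create an external virtual switch on the spare physical NIC, keeping a host-side virtual NIC
    New-VMSwitch -Name "HBSwitch" -NetAdapterName "NIC3" -AllowManagementOS $true

    # give the host's new virtual NIC an address on the heartbeat subnet (use a different address on the second host)
    New-NetIPAddress -InterfaceAlias "vEthernet (HBSwitch)" -IPAddress 10.0.99.1 -PrefixLength 24

    # add a heartbeat NIC to the guest cluster node running on this host and connect it to the new switch
    Add-VMNetworkAdapter -VMName "SERVER1" -SwitchName "HBSwitch"

    # inside each guest, assign an address in the same 10.0.99.0/24 subnet, then ping all four addresses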
    Friday, April 19, 2013 11:32 PM
  • So I've got this exact same situation as the rest of you (VM Clustering works on same physical host, doesn't on separate physical hosts).

    I've also got a heartbeat network setup as mentioned above, but still no joy.

    I'm using a converged fabric with a logical switch deployed via SCVMM: 8 NICs teamed behind it, with multiple subnets running through it for Data, Cluster/Heartbeat, iSCSI and Live Migration.  Same config on both Hyper-V 2012 servers.

    Any more ideas?


    My System Center Blog

    Thursday, May 2, 2013 7:41 PM
  • A VM cluster on a Hyper-V 2012 cluster is fully supported, no matter whether the VM cluster nodes are on the same host or on different hosts.

    Recently we've been dealing with several different cases with quite similar symptoms: when the nodes are on different hosts, the VM cluster network fails or cluster creation fails. The resolution can vary, so I'll list them all:

    1. Install the Windows 2012 network-related hotfixes.

    2. If you have in-box teaming configured, change the teaming settings so you are not using the default LoadBalancingAlgorithm (a sketch follows below).

    3. Run Disable-NetAdapterChecksumOffload against all physical NICs and teamed NICs.
    This turned out to be an issue with the NICs; most cases were with Broadcom NICs, but we have also seen it apply to HP NICs.
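
    For item 2, a hedged example of moving an in-box team off the default load-balancing algorithm (the team name "Team1" and the chosen algorithm are placeholders; pick what fits your design):

    # list the current teams and their settings
    Get-NetLbfoTeam

    # switch the team to the Hyper-V port load-balancing algorithm
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort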

    Tuesday, May 7, 2013 12:35 AM
  • Hi Sophia,

    When using the Disable-NetAdapterChecksumOffload command, does it need targeting at just TcpIPv4 or IpIPv4 as well?

    Regards,

    Steve


    My System Center Blog

    Tuesday, May 7, 2013 2:39 PM
  • Just to follow up, I've tried the above solution and it works :)

    1. Servers were already up-to-date with patches

    2. Changed nic team from the default algorithm

    3. ran Disable-NetAdapterChecksumOffload -Name * -TcpIPv4

    Not sure if this only needed running on the physical NICs or not, but I ran it on everything including the virtual switch.

    I did this out of curiosity, it being a lab environment, with all VMs still running.  Lesson learnt - don't do it on an operational host!
    The NICs drop network connectivity, which is to be expected, but more importantly it messed up one of my virtual switches to the point where Server 2012 thought the adapters assigned to the team were AWOL (the message in the NIC Teaming GUI said "adapter not found"), PowerShell knew the NICs were up, but the switch was down and SCVMM thought there were no NICs bound to the logical switch.

    Long story short, I dumped the logical switch from VMM, which brought the team back to operational on Server 2012 but didn't remove the team.  I then dumped the team from the server, recreated the logical switch and virtual adapters from SCVMM, and all was fine again.

    Regards,
    Steve


    My System Center Blog

    Tuesday, May 7, 2013 6:28 PM
  • All is not so good :(

    The VM clusters were working, but they've started to randomly evict nodes then bring them back online.  Again, only when on separate hosts.

    Sophia, could you confirm the screenshot below is what you'd expect as far as turning off checksums please?

    Thanks in advance,
    Steve


    My System Center Blog

    Friday, May 10, 2013 7:26 AM
  • Steve, don't add any additional parameter to the command, as it should be applied to both IPv4 and IPv6.

    On the Hyper-V host machines, run the following PowerShell:

    Get-NetAdapter -Name XXXX | Disable-NetAdapterChecksumOffload

    Where XXXX is the name of each physical NIC that is part of the team, and of the teamed NIC itself (this is the teamed NIC that binds to the virtual switch to which the VMs are connected).

    Alternatively, we can edit the NIC properties to turn off checksum offload on each adapter involved.

    Restart the machine for the change to take effect.

    Monday, May 13, 2013 2:28 AM
  • Hi sophia_whx,

    I'm running Server Core so I can't tweak the NICs through the GUI :(

    I can confirm, however, that disabling all of the various checksum offloads on all of the NICs works a treat.

    Any reference or information on what the likely performance impact of having this disabled may be?

    Regards,
    Steve


    My System Center Blog

    Tuesday, May 14, 2013 7:08 PM
  • I followed Sophia's advice with my demo cluster and was able to get it to work after nearly 4 days of wrestling with it. I have a mix of virtual and physical servers that I am adding to a SQL Server 2012 AlwaysOn availability group for proof-of-concept purposes for two of our larger clients. We also have those pesky Broadcom NICs in both the physical server participating in the cluster and the physical Hyper-V host.

    Thank God for those sneaky PowerShell cmdlets!

    Wednesday, May 15, 2013 5:58 PM
  • The problem's name is: "Microsoft Failover Cluster Virtual Adapter Performance Filter"

    The description and solution (or rather, the workaround) are here: http://support.microsoft.com/kb/2872325
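
    Based on the KB's workaround of disabling that component, a hedged sketch run inside each guest cluster node ("Ethernet" is a placeholder adapter name; check the KB for the current guidance):

    # show which adapters have the filter bound
    Get-NetAdapterBinding -DisplayName "Microsoft Failover Cluster Virtual Adapter Performance Filter"

    # disable the filter binding on the guest's NIC
    Disable-NetAdapterBinding -Name "Ethernet" -DisplayName "Microsoft Failover Cluster Virtual Adapter Performance Filter"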

    Petr Kasan

    • Proposed as answer by Patrick Schüle Wednesday, November 20, 2013 2:44 PM
    Tuesday, August 6, 2013 1:39 PM
  • First of all:

    You cannot use the cluster name 'Services'.

    http://support.microsoft.com/kb/909264

    Have you tried to change the cluster name?

    BR

    Wednesday, March 26, 2014 4:01 PM
  • Posting in hopes of saving someone else some time:

    Be cautious of using Windows Server 2012 management tools to manage Windows Server 2012 R2 - the symptoms are the same: creating a cluster will time out when using the 2012 Failover Cluster Manager to set up 2012 R2.

    Mikrodots

    Tuesday, May 20, 2014 1:36 AM
  • I would also advise checking for duplicate Service Principal Names (SPNs) in your domain. In our case, we received the error because the SPN for the cluster had somehow already been registered to a different server.

    Run the following command to find all duplicate SPNs in your domain:
    SETSPN -X

    If any of your SQL nodes, your Windows cluster name, or your SQL cluster name is seen in the output, you'll need to remove the duplicate SPNs. For example:

    HOST/SQLCLUSTER1 is registered on these accounts:
            CN=APPSERVER04,OU=Servers,OU=SQL,OU=usa,DC=corp,DC=company,DC=net
            CN=SQLCLUSTER1,OU=Servers,OU=SQL,OU=usa,DC=corp,DC=company,DC=net

    MSServerClusterMgmtAPI/SQLCLUSTER1 is registered on these accounts:
            CN=SQLCLUSTER1,OU=Servers,OU=SQL,OU=usa,DC=corp,DC=company,DC=net
            CN=APPSERVER04,OU=Servers,OU=SQL,OU=usa,DC=corp,DC=company,DC=net

    Run these commands to delete the SPNs from the old server (preferably on a domain controller - you likely need to be a domain administrator to modify SPNs):
    setspn -D HOST/SQLCLUSTER1 CORP\APPSERVER04$
    setspn -D MSServerClusterMgmtAPI/SQLCLUSTER1 CORP\APPSERVER04$

    Tuesday, August 23, 2016 7:23 PM