Warning (13926) Host cluster hypervcluster.mydomain.local was not fully refreshed because not all of the nodes could be contacted.

    Question

  • Here is the scenario: I have a Windows Server 2008 SP2 server running Hyper-V and SCVMM R2 RTM. I also have a 2-node Windows Server 2008 R2 Hyper-V cluster. When adding hosts to manage in SCVMM, I can add the local standalone Hyper-V server just fine. When I go to add the cluster, the two nodes under the cluster add fine, but the cluster name itself fails to add with the following message:

    Warning (13926)

    Host cluster hypervcluster.mydomain.local was not fully refreshed because not all of the nodes could be contacted. Highly available storage and virtual network information reported for this cluster might be inaccurate.

    Recommended Action

    Ensure that all the nodes are online and do not have Not Responding status in Virtual Machine Manager. Then refresh the host cluster again.


    Both cluster nodes are online, two cluster networks are up and running, and there is one shared volume in Available Storage. I just can't seem to manage it through SCVMM R2 RTM, although I can manage the cluster just fine from the local cluster administrator.

    I saw this post with a similar problem, but it seems like those hotfixes were all rolled into Windows Server 2008 SP2 and, I assume, Windows Server 2008 R2.

    The firewalls on these systems have all been disabled as well.

    Does anyone have any ideas about what may be going on?


    David A. Bermingham, Director of Product Management, SteelEye Technology
    Tuesday, August 25, 2009 9:44 PM

Answers

  • Hi everyone,

    I'm surprised that this thread is still unresolved.

    I have the same issue (Warning 13926) in adding a single Hyper-V R2 node cluster in SCVMM R2.

    In fact, the problem only appears if you add a one-node cluster.

    This is very annoying for people who want to migrate from Hyper-V 2008 to Hyper-V 2008 R2!

    Fortunately, I found an answer on the blog of a fellow Virtual Machine MVP (Aidan Finn):

    http://www.aidanfinn.com/?p=10000

    There are two workarounds.

    First workaround: add a second node to your cluster.

    But in my case, I don't have a second node (because I have to empty the Hyper-V R1 hosts before recycling them as R2 hosts).

    The second workaround worked for me!

    Second workaround:

    After getting the warning message, edit the cluster configuration and set the host reserve to 0.

    Restart the SCVMM service.

    If everything goes well, the host should be added correctly a couple of minutes after that.

     

    Here is a sample of Aidan's exchange with the support team:

    “After confirming this design with our product team, they told me that this feedback has already been reported to them; it is a real problem, and they are considering improving it in a future release of VMM. They are aware of this issue and are actively looking into it further. I'm sorry for the inconvenience this causes you now.

    In the current release of VMM, there are also some workarounds you can take to “solve” this issue. One possible workaround that has been identified is to temporarily add a second node to the cluster, at which point you are then able to bring it under management by VMM. While this is neither an ideal nor a long-term solution, it has allowed other customers who have run into this issue to move forward with their projects. Another workaround that I've found that will allow the ‘single node cluster’ scenario to complete is: once the Add Host process completes with info and shows the 13926 warning, change the cluster reserve value from 1 to 0, and then recycle the vmmservice. This should allow the host cluster to be added successfully in the console.”

    Thanks Aidan :)

    • Proposed as answer by Ilya_M Wednesday, October 13, 2010 1:06 PM
    • Marked as answer by David BerminghamMVP Wednesday, October 13, 2010 1:38 PM
    Thursday, June 03, 2010 8:45 AM
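    For anyone scripting the second workaround above, the service-recycle step can be sketched in Python. This is only a sketch: it assumes the VMM service's short name is "vmmservice" (verify with sc query on your VMM server) and that it runs from an elevated prompt on that server; the host-reserve change itself is still made in the cluster's properties in the Administrator Console.

    ```python
    import subprocess

    def recycle_vmm_service(run=subprocess.run):
        """Stop and restart the VMM service.

        "vmmservice" is an assumed service short name -- check it with
        `sc query` on your VMM server first. The `run` parameter exists
        so the command sequence can be exercised without touching a real
        service.
        """
        commands = [
            ["net", "stop", "vmmservice"],
            ["net", "start", "vmmservice"],
        ]
        for cmd in commands:
            run(cmd, check=True)
        return commands
    ```

    Injecting the runner also makes the sketch safe to dry-run: pass a function that just records the commands instead of executing them.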
  • Looks like there is an issue with a Storage Class resource that is causing the problem.  I'll have to take this up with the SCVMM team.
    David A. Bermingham, Director of Product Management, SteelEye Technology
    Sunday, September 06, 2009 1:47 AM

All replies

  • Can you resolve the cluster name from your workstation, i.e., can you ping and nslookup the cluster name?

    Also check that you can resolve the cluster name from the VMM server itself.

    HTH

    regs,

    Chucky 

    Thursday, August 27, 2009 7:01 AM
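    A minimal Python sketch of these name-resolution checks; the host name below is the placeholder cluster name from the question, so substitute your own cluster and node names.

    ```python
    import socket

    def resolve_all(names):
        """Map each name to its resolved IPv4 address, or None on failure."""
        results = {}
        for name in names:
            try:
                results[name] = socket.gethostbyname(name)
            except socket.gaierror:
                results[name] = None
        return results

    # Placeholder name from the thread -- replace with your own cluster
    # name and node names before running.
    for name, addr in resolve_all(["hypervcluster.mydomain.local"]).items():
        print(name, "->", addr)
    ```

    Run this both on the workstation and on the VMM server itself, since the two can have different DNS views.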
  • Yeah, ping and NSLOOKUP both work just fine.  Any other ideas?
    David A. Bermingham, Director of Product Management, SteelEye Technology
    Thursday, August 27, 2009 1:19 PM
  • I'm getting this warning 13926 message. Did you get this resolved?
    Thursday, September 17, 2009 3:01 PM
  • Not yet.  Are you using shared storage or 3rd party replicated storage?
    David A. Bermingham, Director of Product Management, SteelEye Technology
    Thursday, September 17, 2009 3:25 PM
  • Not to hijack this thread... though I am having the same issue. Before I get this error message (13926), I receive an error about WinRM:

    A Hardware Management error has occurred trying to contact server hypernode1.domain.local.
     (Unknown error (0x80338113))

    Recommended Action
    Check that WinRM is installed and running on server hypernode1.domain.local. For more information use the command "winrm helpmsg hresult".

    After I click refresh host, I then get this...

    Warning (13926)
    Host cluster hypercluster1.frontierus.local was not fully refreshed because not all of the nodes could be contacted. Highly available storage and virtual network information reported for this cluster might be inaccurate. 

    Recommended Action
    Ensure that all the nodes are online and do not have Not Responding status in Virtual Machine Manager. Then refresh the host cluster again.


    If I wait 3-4 minutes after receiving the last message, a refresh works fine and all VM statuses update.

    DNS configuration, pings, cluster validation, SCSI-3 Persistent Reservations, and all failover cluster management activity work properly while SCVMM 2008 R2 RTM balks with these errors. I'm pulling my hair out.

    When I receive the error about WinRM, I check to ensure that each node is listening. I even ran the winrm quickconfig command to ensure that the machines are listening on all interfaces.

    Any Ideas?

    Thanks

    Tuesday, September 22, 2009 3:33 AM
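    As a quick complement to the WinRM checks above, a small Python sketch that probes whether a node's WinRM TCP port accepts connections at all. hypernode1.domain.local is the placeholder host from the post; WinRM 2.0 listens on TCP 5985 by default, while WinRM 1.1 used TCP 80.

    ```python
    import socket

    def winrm_port_open(host, port=5985, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholder host from the post -- replace with your own node name.
    # Use port=5985 for WinRM 2.0 (Server 2008 R2) or port=80 for WinRM 1.1.
    print(winrm_port_open("hypernode1.domain.local"))
    ```

    A successful TCP connect only shows the listener is reachable; it does not prove WinRM will authenticate the VMM server, so treat it as a first-pass filter.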
  • Not yet.  Are you using shared storage or 3rd party replicated storage?
    David A. Bermingham, Director of Product Management, SteelEye Technology

    In my case, I'm using two Cluster Shared Volumes hosted on an HP EVA3000, with the proper Host Mode set to hex 00000004198009A8 per HP guidelines. My cluster passes validation with no errors.
    Tuesday, September 22, 2009 3:36 AM
  • (Quoting the WinRM / Warning 13926 post above in full.)

    Got exactly the same issue - Refresh-VM fails one minute, then works just fine the next. VMM R2 with Hyper-V R2. Connectivity is fine, the firewall is off/open, etc.
    • Proposed as answer by Karel Beukes Tuesday, October 20, 2009 12:24 PM
    • Edited by Karel Beukes Tuesday, October 20, 2009 12:24 PM typo
    Wednesday, October 07, 2009 4:15 PM
  • I have the same issue. I installed R2, added the Hyper-V role, installed the cluster, added a CSV, and went to add it to SCVMM R2; it starts to add the cluster and server and bang, hits this error. Right now the cluster only has one node, as I need to upgrade my other server. Is that causing my issue?
    Thursday, October 08, 2009 5:58 PM
  • I have the same issue in SCVMM R2.  Only one node is returning the error in a three node Hyper-V Windows Server 2008 SP2 cluster.  All the virtual machines on the host function as expected.
    Monday, October 12, 2009 2:08 PM
  • Has anything further come of this? I have a Server 2008 R2 cluster that I cannot add to SCVMM 2008 R2. I wonder, does the SCVMM server have to be installed on Server 2008 R2 to manage an R2 cluster?
    Monday, October 19, 2009 1:55 PM
  • So, two things:

    1) The nodes that can't be added to SCVMM R2: I had a similar issue. It was Kaspersky AV. Once disabled, it joined just fine.

    2) The host/VM refreshes that fail one minute and work the next: here is what we did, and it fixed the problem.

    a) We upgraded the NIC drivers for the management interface of the host (Broadcom 507c) on a Dell R900 (new drivers are available now).

    b) We also upgraded the firmware for the above-mentioned NIC.

    c) Then we forced the NIC speed to 1 Gig full duplex.

    d) We disabled TCP offloading for IPv4.

    e) We made sure the "Allow the computer to turn off this device to save power" option was disabled.

    So far so good - no issues yet. We did this on two different clusters; both had the same issues, and they seem to have been resolved for over a week now. The third cluster is tonight - hold thumbs! Thanks
    • Proposed as answer by Gerard Wendling Wednesday, October 21, 2009 11:08 AM
    Tuesday, October 20, 2009 12:30 PM
  • Got the same kind of errors: 12710 (0x80338104), then 13926. In my case it was due to a GPO configuring restricted groups in the domain. I put the DOMAIN\VMMSERVER account back in the local Administrators group of the managed VMM servers, and the refresh immediately worked.

    Hope this helps.

    Gérard

    • Proposed as answer by Gerard Wendling Wednesday, October 21, 2009 11:27 AM
    Wednesday, October 21, 2009 11:12 AM
  • We were told at a Microsoft conference that the system running SCVMM R2 needs to be running Windows Server 2008 R2 in order to see a Windows Server 2008 R2 failover cluster. I am going to test this, but does anyone know for sure?
    Thursday, October 22, 2009 6:54 PM
  • Any new status on this?  I am having the same issue.
    Tuesday, May 25, 2010 3:17 PM
  • I upgraded SCVMM to R2 and added the second node, then found that the network adapter was named differently on the second node. I renamed it to match the first node, and then it came up.

    Tuesday, May 25, 2010 3:49 PM
  • (Duplicate of the marked answer above; see the Answers section.)
    Thursday, June 03, 2010 8:45 AM
  • This solved my problem. I have 3 nodes and added them to a cluster before adding them to VMM. When I added one of the nodes, it added the whole cluster, but each node in the cluster just sat there with "Adding..." and would never update.

    I configured the cluster by changing the reserve value from 1 to 0 and restarted the VMM service. I then refreshed each of the nodes, and they are fine now. We'll see what happens when I change the reserve back to 1. /crosses fingers


    EDIT: Changing back the reserve to 1 was successful. You have to refresh each node and then the cluster itself. All is well.
    • Edited by cfranklin Wednesday, March 21, 2012 3:37 PM
    Wednesday, March 21, 2012 3:30 PM