Hyper-V failover cluster attempts to prime KDC cache to decommissioned primary domain controller

    Question

  • Hello, I have a small issue on a Hyper-V failover cluster.

    When live migrating a virtual machine there is a noticeable delay that was not there before. When analysing the cluster log, it appears there is a remaining record pointing to the old PDC: INFO  [RES] Network Name: [NNLIB] Priming local KDC cache to \\DC03.domain01.com for domain label domain01. DC03, however, has been correctly demoted and removed from the cluster completely. There are also no DNS records remaining that point to DC03.
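    For reference, the entry above was found by dumping and searching the generated cluster log, roughly along these lines (assuming the standard FailoverClusters PowerShell module on the nodes; the paths are just examples):

        # Dump the cluster log from every node for the last 24 hours (TimeSpan is in minutes)
        Get-ClusterLog -Destination C:\Temp -TimeSpan 1440

        # Search the generated logs for any reference to the demoted DC
        Select-String -Path C:\Temp\*.log -Pattern 'DC03' | Select-Object -First 20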

    Can somebody please tell me if I'm missing something, or where the cluster nodes could be getting the record that points to the decommissioned primary domain controller? The new primary domain controller is working as it should, and everything else seems happy except this "local KDC cache"; it does not prevent the live migration, it just delays it.
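    For completeness, a couple of standard checks that confirm the nodes are locating the new DC (nltest and klist ship with Windows; domain01.com as in the log line above):

        # Which domain controller does this node actually locate for the domain?
        nltest /dsgetdc:domain01.com

        # Which Kerberos tickets are cached for the current logon session?
        klist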

    Thanks in advance!


    Friday, February 17, 2017 7:12 PM


All replies

  • Hi Sir,

    Have you removed the SRV records of that old DC?

    Have you restarted the cluster?
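    A quick way to confirm is to query the standard domain-controller locator records and check whether the old DC still shows up (substitute your own domain name):

        # Standard DC locator SRV records; DC03 should no longer be listed
        Resolve-DnsName -Name _ldap._tcp.dc._msdcs.domain01.com -Type SRV
        Resolve-DnsName -Name _kerberos._tcp.dc._msdcs.domain01.com -Type SRV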

    Best Regards,

    Elton


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    • Marked as answer by GeoffD_ Monday, March 20, 2017 2:05 PM
    Monday, February 20, 2017 9:08 AM
    Moderator
  • Hello Elton,

    Yes, the SRV records have been removed, but the cluster has not been restarted yet.

    Is it alright to reboot the nodes one by one after migrating the machines, or will this require a simultaneous reboot of all the nodes?
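    (To be clear, by one by one I mean draining each node in turn with the standard FailoverClusters cmdlets, roughly like this, where HV-NODE1 is just a placeholder name:)

        # Move roles off the node and pause it (run from any cluster node)
        Suspend-ClusterNode -Name HV-NODE1 -Drain

        # Reboot the drained node (run this on HV-NODE1 itself)
        Restart-Computer

        # Once it is back up, allow it to host roles again
        Resume-ClusterNode -Name HV-NODE1 -Failback Immediate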

    Thanks for your input! 

    Tuesday, March 07, 2017 7:35 PM
  • I haven't had this happen before, but I am wondering if there is a less intrusive method.

    It is a very simple matter to change the VIP address of the cluster.  During this process, the cluster automatically 'restarts' things on the new VIP.  This might force the cluster to clear itself of old data.

    Again, I have not tried this, but changing the VIP is really simple and does not impact anything running on the cluster. Yes, access to management via the VIP will be temporarily disrupted, but even that isn't critical. And you can quickly change it back to the original VIP, so the test VIP is only in place for a few seconds. I would perform the change from one of the cluster nodes so you don't lose network access to the cluster VIP while making the changes.
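    If you prefer to script it rather than use Failover Cluster Manager, the change looks roughly like this ('Cluster IP Address' is the default resource name and the addresses are only examples; adjust for your network):

        # Point the cluster's IP Address resource at a temporary address
        Get-ClusterResource "Cluster IP Address" |
            Set-ClusterParameter -Multiple @{ Address = "192.168.1.250"; SubnetMask = "255.255.255.0" }

        # Cycle the Cluster Name resource so the new address takes effect
        Stop-ClusterResource "Cluster Name"
        Start-ClusterResource "Cluster Name"

        # Revert to the original address the same way afterwards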


    . : | : . : | : . tim

    • Marked as answer by GeoffD_ Monday, March 20, 2017 2:05 PM
    Tuesday, March 07, 2017 8:01 PM
  • Hello Tim,

    Thanks for your advice. I decided to re-test today before attempting the fix, and the issue seems to have cleared itself up over time.

    Thanks for your answer anyway!

    Cheers,

    Geoffrey

    Monday, March 20, 2017 2:05 PM