Migration fails between 2 clusters


  • Hi there,

    I wonder if anyone can help me with an issue I'm having when I try to migrate some VMs from one of our clusters to another using Virtual Machine Manager 2016.

    The blades are all running Windows Server 2016 and both 2016 clusters are available in VMM, so I'm using the Migrate Virtual Machine option, selecting the destination host on the other cluster, then selecting the CSV that I'd like the VM to go into.  The migration starts, gets to 33% every time and then fails with the error below.  I've looked at the details of the job within VMM and it seems to fail at step 1.3, which is "Staged migrate virtual machine to create a planned VM on the target host".

    Some VMs do move without a problem to some blades in the other cluster, so I'm not sure why it fails when moving VMs to certain blades and not others.  Any help would be greatly appreciated, as I'd like to avoid having to shut the VMs down, manually copy the vhdx files over to the other cluster and import them - I want to avoid downtime if I can.

    Thanks in anticipation,


    Error (12700)

    VMM cannot complete the host operation on the server because of the error: Virtual machine migration operation for 'HOST1V' failed at migration source 'hv05'. (Virtual machine ID 221F41B4-D270-40B8-A40E-0A6F1B5D4DDE)

    The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host '': The target principal name is incorrect. (0x80090322).

    The Virtual Machine Management Service failed to authenticate the connection for a Virtual Machine migration at the source host: The target principal name is incorrect. (0x80090322).

    Unknown error (0x8000)

    Recommended Action

    Resolve the host issue and then try the operation again.

    Friday, May 12, 2017 2:54 PM

All replies

  • Hello Bonemister,

    If you are using Kerberos as the authentication protocol for live migration, please check the SPNs for the target Hyper-V host by running the following command on a domain controller.

    setspn.exe -L <hyper-v host name>
    The following is sample output of the command; please make sure the SPN for the migration service (Microsoft Virtual System Migration Service) has been registered.

    PS C:\Users\Administrator> setspn.exe -L host03

    Registered ServicePrincipalNames for CN=HOST03,CN=Computers,DC=sdn,DC=lab:
            Hyper-V Replica Service/HOST03
            Hyper-V Replica Service/host03.sdn.lab
            Microsoft Virtual System Migration Service/HOST03
            Microsoft Virtual System Migration Service/host03.sdn.lab
            Microsoft Virtual Console Service/HOST03
            Microsoft Virtual Console Service/host03.sdn.lab
            WSMAN/host03
            WSMAN/host03.sdn.lab
            TERMSRV/HOST03
            TERMSRV/host03.sdn.lab
            RestrictedKrbHost/HOST03
            HOST/HOST03
            RestrictedKrbHost/host03.sdn.lab
            HOST/host03.sdn.lab
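    If the migration SPN turns out to be missing from that list, it can be re-registered with setspn. A minimal sketch - the host names here are placeholders from the sample output above, so substitute your own Hyper-V host:

    ```powershell
    # Register the live migration SPNs for a host whose entries are missing.
    # -S checks for duplicates before adding; replace HOST03 / host03.sdn.lab
    # with the short and FQDN names of your own Hyper-V host.
    setspn.exe -S "Microsoft Virtual System Migration Service/HOST03" HOST03
    setspn.exe -S "Microsoft Virtual System Migration Service/host03.sdn.lab" HOST03
    ```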

    Best regards
    Andy Liu

    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact

    Wednesday, May 17, 2017 4:28 AM
  • Hi Andy,

    Thanks very much for your reply, it was useful!  It does indeed list Microsoft Virtual System Migration Service as an SPN, so I'm still trying to figure out why some migrations fail and some work - any further ideas would be gratefully received!



    Monday, May 22, 2017 9:54 AM
  • What happens if you shut down the VM and then try to migrate it, still using VMM?
    Monday, May 22, 2017 12:58 PM
  • Hi Willy,

    Thanks for replying!

    I've just got some downtime approved for some of the servers that failed to live-migrate so I'll turn them off next week and try and migrate them using VMM again when they are off and see what happens.

    Fingers crossed that works but I'd really like to get to the bottom of why some failed to live-migrate when about 50 VMs have migrated over without a problem.



    Friday, May 26, 2017 1:02 PM
  • Hi Bonemister,

    Did you ever fix this? I'm bumping into the same issue here.



    Friday, April 6, 2018 12:31 PM
  • Hi Herman,

    Yes, I managed to get to the bottom of the problem thankfully! The issue for me was a setting within Virtual Machine Manager.  Expand your clusters, right-click one of the blades that the live migration fails on and select Properties.  Under Migration Settings, in the Incoming live migration settings section at the bottom, I clicked Use any available network - or, if you want to lock it down more, enter the specific subnets on which you wish to allow migrations.
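    For reference, roughly equivalent incoming live migration settings can also be inspected and changed on each blade with the standard Hyper-V PowerShell module (a sketch only - the subnet shown is a placeholder, so use the subnets from your own migration network):

    ```powershell
    # Check how incoming live migrations are currently restricted on this host.
    Get-VMHost | Select-Object VirtualMachineMigrationEnabled, UseAnyNetworkForMigration

    # Allow incoming migrations on any available network...
    Set-VMHost -UseAnyNetworkForMigration $true

    # ...or restrict them to a specific subnet instead (placeholder subnet).
    Set-VMHost -UseAnyNetworkForMigration $false
    Add-VMMigrationNetwork "10.0.10.0/24"
    ```

    Note that VMM can overwrite host-level settings when it refreshes the host, so making the change through the VMM properties dialog as described above is the safer route in a VMM-managed cluster.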

    Also, under Virtual Switches, make sure each blade has the same virtual switches set up - if one of the blades doesn't have the correct virtual switch, it won't be able to receive the migrated VMs.
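    A quick way to compare the switches across blades is with the Hyper-V cmdlets (again a sketch - the host names below are placeholders taken from the error message earlier in the thread):

    ```powershell
    # Compare virtual switch names between the source and destination blades.
    $src = Get-VMSwitch -ComputerName "hv05"   | Select-Object -ExpandProperty Name
    $dst = Get-VMSwitch -ComputerName "host03" | Select-Object -ExpandProperty Name
    Compare-Object $src $dst   # no output means the switch names match
    ```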

    Hope this helps!



    Monday, April 9, 2018 12:16 PM