Hyper-V 3.0 VM stuck in "Stopping" state

  • Question

  • We are running Hyper-V on a 2012 server, and for the 20th time in as many months I've logged into a VM and restarted it or migrated it to another host, only for it to get stuck in the "Stopping" state in Hyper-V 3.0 with no Turn Off, Shut Down, Save, Pause, or Reset options available.  The only thing I've been able to do is schedule a time to reboot the Hyper-V server.  Has anyone else experienced this?  Is there any command I can run to get the VM to turn off without having to reboot or restart the Hyper-V service?

    Our host environment is as below:

    HP C7000 Blade Chassis

    16x ProLiant BL460c Gen8 Server (Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (8 Cores) - 256GB Ram)

    Blade servers with HP FlexFabric 10Gb 2-port adapters

    2x HP VC Flex-10 Enet Module on the chassis

    HP VC 8Gb 20-Port FC Module on the chassis

    We also turned off TCP checksum offloading, but it didn't help.

    We've opened a case with HP support; they analyzed our system but didn't find any problem themselves.

    Does anybody have any fixes for these problems?

    Wednesday, January 30, 2013 11:51 AM

All replies

  • I have successfully got a VM to reboot by killing the virtual machine worker process: right-click the process in Task Manager and choose End Task.

    Have you tried that?

    It does not sound like a hardware issue!

    If you have, say, 50 VMs on the host, you can find out which GUID each vmwp.exe has on its command line and match it to the right VM, so you do not kill the wrong VM :-) Right-click the column headers in Task Manager and add the Command Line column.

    Then, once you have confirmed it is the right VM, right-click the process and choose End Task.
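    A PowerShell sketch of the same idea (assumptions: the in-box Hyper-V module is available on the host, and "MyVM" is a placeholder name):

```powershell
# Sketch: map a VM name to its vmwp.exe worker process via the VM's GUID.
# "MyVM" is a placeholder - substitute the name of the stuck VM.
$vmName = "MyVM"
$vmGuid = (Get-VM -Name $vmName).Id.Guid   # the VM's GUID as a string

# Each worker process carries its VM's GUID on its command line
$worker = Get-WmiObject Win32_Process -Filter "Name = 'vmwp.exe'" |
          Where-Object { $_.CommandLine -match $vmGuid }

# Last resort: force-kill the worker (equivalent to pulling the VM's power)
Stop-Process -Id $worker.ProcessId -Force
```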

    • Edited by vNiklasMVP Wednesday, January 30, 2013 12:22 PM
    • Proposed as answer by JESUSRICO1 Tuesday, July 21, 2015 10:13 PM
    Wednesday, January 30, 2013 12:07 PM
  • We already tried that, and it didn't work! The process is stuck and cannot be killed in any way, even with terminator tools such as Process Hacker (kernel NtTerminateProcess, etc.). We are still trying to find the root cause of this issue. The guest VMs end up in the stopping state after a live migration, reboot, shutdown, or quick migration. The issue doesn't occur every time or on the same VMs; it occurs at random times, on random VMs, random hosts, and random guest OSes.

    Best Regards,

    Wednesday, January 30, 2013 12:19 PM
  • We are still looking for the source of the problem, i.e. why we are getting the "stopping" state...

    However, the VM that is stuck in the stopping state automatically moves to another host when I kill Clussvc.exe.
    • Edited by mutluozel Wednesday, January 30, 2013 12:46 PM
    Wednesday, January 30, 2013 12:45 PM
  • Have you opened a support case with Microsoft? Maybe it's time to do that.

    Do the event logs say anything worth digging into?

    Wednesday, January 30, 2013 1:24 PM
    We already did that; the case is open and the logs are being analyzed, but we are still waiting... and still looking for a solution!

    Wednesday, January 30, 2013 1:53 PM
  • mutluozel, any update on this case?

    I have the same problem here.

    Thursday, March 14, 2013 3:43 PM
  • Having the same issue, any updates would be appreciated..

    Found the following hot fix from MS, I am in the process of rotating our servers down to apply it.

    "Virtual machines freeze at the "Stopping" state after you shut down the virtual machines in Windows Server 2012" http://support.microsoft.com/kb/2823643

    • Proposed as answer by Phil Barclay Thursday, May 16, 2013 2:02 AM
    • Edited by Phil Barclay Thursday, May 16, 2013 2:03 AM
    Monday, May 13, 2013 4:06 PM
  • Hi there,

    We are currently experiencing the exact same issue.

    Did the fix mentioned above resolve your issue?


    Saturday, January 18, 2014 2:29 AM
  • I am facing the same issue, but on a Windows 2012 R2 Hyper-V cluster. One of the VMs is stuck at "Status: Stopping - In Service, Locked".

    KB2823643 states that it is for Windows 2012 only.

    Is there a hotfix for this issue on Windows 2012 R2 Hyper-V?

    Thursday, March 6, 2014 7:12 AM
  • Having the exact same issue here too, on a Hyper-V cluster on Windows Server 2012 R2 (Datacenter edition).

    Any fix? That article doesn't apply, as CosmicStorm noted.

    Thursday, April 17, 2014 8:10 PM
  • Hi,

    I experienced this problem at a customer site.

    You have to update all firmware and drivers for the HP blades.

    You have to check the network configuration for the HP adapters, the switches, and Windows NIC teaming.

    You must also keep Hyper-V 2012 updated. You can check the recommended hotfix list on this site: http://blogs.technet.com/b/askcore/archive/2013/03/05/looking-for-windows-server-2012-clustering-and-hyper-v-hotfixes.aspx


    Friday, April 18, 2014 8:31 AM
  • Hi Mutluozel,

    We have a few C7000 enclosures with BL460 G7 blades.

    I have observed this issue, or possibly a similar one, many times.

    There is a known issue with HP blades / the Emulex driver and Windows Server 2012 R2 related to VMQ. Because of it, my VMs randomly lose network connectivity. Although the VM says it is connected and the network interface shows a link, no traffic flows. Whenever I had this issue, the first fix I would try was to move the VM, and the VM would then be stuck at that point without giving me any options other than rebooting the entire node.

    Of late, I have been playing around with this issue, and what I observed is that if a VM is stuck at this point, initiating the movement of a second VM from the same server will move both together. I have no clue about the logic behind this, but it has worked for me every time. I now have a better way to fix the network disconnect, so I haven't used this technique for the last few weeks; however, you could try it with a dummy VM.

    As you are also on HP, it may be a good idea to check whether it is a VMQ-related issue.
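    A quick way to check from an elevated PowerShell prompt on the host (a sketch; the cmdlets are from the in-box NetAdapter module):

```powershell
# Show which physical NICs have VMQ enabled and how the queues are laid out
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

# List the VMQ queues currently allocated per adapter
Get-NetAdapterVmqQueue
```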

    I have described this network disconnect issue, the related events, and the fix on my blog:


    VMQ issue - http://www.hyper-v.nu/archives/mvaneijk/2013/11/vnics-and-vms-loose-connectivity-at-random-on-windows-server-2012-r2/

    Good luck !


    Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.


    Friday, April 18, 2014 3:12 PM
  • I was facing the same issue. My Hyper-V VM, which was running on Windows Server 2012, got stuck when I tried to restart it.

    Solution: kill the VM process or restart the Hyper-V host.

    Thursday, August 28, 2014 10:54 AM
  • Remote Access may not have been configured properly; try switching the service off - it should work like magic.
    Monday, September 29, 2014 12:44 PM
  • Excellent post, solved my problem.
    Thursday, April 16, 2015 7:14 PM
  • You are the man! Thanks a lot.
    Monday, April 18, 2016 11:14 AM
  • What exactly solved the issue? I have tried killing the process, but that does not work. Rebooting is not really something you want to do often.
    Thursday, June 9, 2016 8:58 AM
  • Every single time I have seen this it has been NIC driver related.

    There was a known issue on QLE8262 cards where, upon VMQ allocate\de-allocate, the driver would get stuck in a race condition, causing some VMs to lose connectivity and others to get stuck in a stopping state. The only option was a bounce of the hypervisor. As a temporary workaround we could disable VMQ completely (and take the associated performance hit). We then worked with the hardware vendor debugging the driver until the offending section of code was isolated; they released a private driver for us (we are quite a big customer), and several months later the fix was rolled out in their latest driver.

    There are countless posts about this sort of thing on the internet, across multiple NIC vendors. If disabling VMQ stops the issue from occurring, raise a support case with your hardware vendor.
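    Assuming the in-box NetAdapter cmdlets, the temporary workaround looks roughly like this (expect a throughput hit on 10Gb NICs while VMQ is off):

```powershell
# Temporary workaround: turn VMQ off on every adapter that has it enabled
Get-NetAdapterVmq | Where-Object Enabled | Disable-NetAdapterVmq

# Once a fixed NIC driver is installed, re-enable it:
# Get-NetAdapterVmq | Enable-NetAdapterVmq
```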

    This posting is provided "AS IS" with no warranties, and confers no rights. Please remember, if you see a post that helped you please click "Vote as Helpful", and if it answered your question, please click "Mark as Answer". I do not work for Microsoft, I manage a large estate in the private sector, my views are generally first hand production experiences.

    Friday, June 10, 2016 9:58 AM
  • In this case it was a storage failure. Apparently the storage was reporting the disk as missing until we completely shut down the host and the attached JBOD. After booting, it started repairing the RAID using a hot spare.
    Friday, June 10, 2016 2:25 PM
  • I'm facing a very similar issue on a 2-node W2012 R2 Datacenter cluster with iSCSI CSV. Live migration hangs the VM, and nothing except a host restart can fix it. From what I was able to test (production environment), some machines are able to migrate in both directions and some only in one. It started to happen after the last update run (13.07.2016), when the machines were supposed to fail back (so both nodes had already received the update). Below is the list of updates.


    The previous update run on 08.07.2016 was successful, and all the VMs were able to live migrate in both directions. I have looked through all the possible logs, and yes, there are lots of info, error, and warning entries, but they don't make any sense or point to a single problem. Any help would be greatly appreciated.

    • Edited by RougeBeta Thursday, July 14, 2016 1:47 PM
    Thursday, July 14, 2016 1:46 PM
  • I did manage to resolve the issue on my own. The problem was a single update that was responsible for the LM deadlock: KB3161606. Installing KB3172614 made the problem go away.
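    To check where a host stands, something like this should work (the removal command is commented out because it forces a reboot):

```powershell
# Is the problematic update present, and is the fix installed?
Get-HotFix -Id KB3161606, KB3172614 -ErrorAction SilentlyContinue

# To remove the bad update if the fix cannot be installed (requires a reboot):
# wusa.exe /uninstall /kb:3161606
```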

    Monday, September 5, 2016 12:32 PM
  • Facing the same problem here with Server 2016, HPE BL460c Gen8 with 554FLB adapters.

    I thought VMQ was supported now?

    Tuesday, February 13, 2018 5:48 PM
  • We are experiencing the same issue. The VMs always appear to get stuck at 84% when they are live migrated. Here is what I know:

    • Happening on Gen8 and Gen9 HPE BL460c
    • Happening with or without the HPE WBEM providers (which have been known in the past to wipe out the WMI namespace for Hyper-V)
    • You can't kill the vmwp.exe processes
    • You can't stop the Hyper-V Management Service
    • Support from Microsoft claims it is WMI but offers no proof and just wants to re-import MOF files and reboot. Basically they don't want to resolve the issue or dig down and determine what the hell it is
    • We use HPE c7000 chassis with FlexFabric modules and have 2 management NICs per host and 2 NICs for guest traffic. Guest traffic uses Hyper-V Port load balancing
    • We have VMQ enabled, and I also configured the queues not to conflict on the NICs (balanced the VMQ across the NICs)
    • We are not running AV on the hosts
    • We do use SCVMM 2016
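    For reference, balancing VMQ across NICs looks roughly like this (a sketch; the adapter names and processor numbers below are examples, not our actual values):

```powershell
# Give each guest-traffic NIC its own, non-overlapping range of cores
Set-NetAdapterVmq -Name "Guest-NIC-1" -BaseProcessorNumber 2  -MaxProcessors 8
Set-NetAdapterVmq -Name "Guest-NIC-2" -BaseProcessorNumber 18 -MaxProcessors 8

# Verify the resulting layout
Get-NetAdapterVmq | Format-Table Name, BaseProcessorNumber, MaxProcessors
```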

    • Edited by Quadrantids Saturday, February 17, 2018 6:00 PM
    Saturday, February 17, 2018 5:58 PM
  • Remote Access may not have been configured properly; try switching the service off - it should work like magic.

    Thanks @tundama, that solved it!

    We had an interface that had probably messed something up in RRAS (we found the interface via Event Viewer). As soon as we disabled RRAS and rebooted, everything went back to normal. I found the root cause in another forum, which explained that a deadlock can occur during shutdown (of a VM) when RRAS is enabled and has some kind of internal network conflict.
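    For anyone else hitting this, disabling the service amounts to the following (a sketch; RemoteAccess is the service name behind RRAS):

```powershell
# Stop and disable the Routing and Remote Access service
Stop-Service -Name RemoteAccess -Force
Set-Service -Name RemoteAccess -StartupType Disabled
```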

    Wednesday, February 21, 2018 1:12 PM
  • Hello,

    That solved my problem!!!

    Thursday, July 19, 2018 8:47 AM