Hyper-V Manager not able to connect to the Local Server

  • Question

  • Dear Experts...

    Greetings !!!

    1. Issue Description:
    Hyper-V Manager is not able to connect to the local server – it stays in the connecting to Virtual Machine Management Service state and never connects. The Hyper-V host shows Host Not Responding in the SCVMM console.
    As a result, VDIs do not launch for newly connecting users; they simply disappear.

    2. Temporary Fix:
    Restarting the server resolves the issue, but the currently active users are impacted.
    We then need to power on all the VDIs and make sure they all reach the registered state.
    The same issue keeps recurring frequently and is affecting production users.

    3. Troubleshooting Steps Taken:
    We restarted the SCVMM service and the Windows Remote Management (WinRM) service; both restarted successfully, but the issue still persists.
    When we restart the WMI service, its dependent service Hyper-V Virtual Machine Management (vmms.exe) goes into the Stopping state and neither stops nor restarts.
    We tried to kill the vmms.exe process from Task Manager and from an elevated command prompt (roughly the commands sketched below), but it could never be terminated.
    The errors vary: cannot stop the Virtual Machine Management service, another running instance already exists, access denied (when killing the process from Task Manager), etc.
    The issue is not resolved until the server is rebooted.
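
    For reference, this is roughly what we ran (a sketch only – service and process names such as WinRM, SCVMMAgent and vmms are the defaults on our hosts and may differ in other environments):

    # Restart the management-path services
    Restart-Service -Name WinRM -Force
    Restart-Service -Name SCVMMAgent -Force

    # Check the state of the Hyper-V Virtual Machine Management service (vmms)
    Get-Service -Name vmms | Format-List Name, Status, StartType

    # Attempts to force-kill vmms.exe; in our case both fail (access denied /
    # "another running instance exists") because the process is stuck and cannot be terminated
    Stop-Process -Name vmms -Force
    taskkill /F /IM vmms.exe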

    Thanks in Advance ...


    DevT-MCT

    Tuesday, February 12, 2019 4:31 AM

All replies

  • Since you have SCVMM in the management mix here, you really need to ask in the SCVMM forum.  Though it is still Hyper-V under the covers, SCVMM management adds its capabilities on top of a basic Hyper-V management installation, so you need to deal with things through the SCVMM environment.

    https://social.technet.microsoft.com/Forums/en-US/home?forum=virtualmachinemanager


    tim

    Tuesday, February 12, 2019 3:18 PM
  • Yes, you are correct. For the time being, let's forget SCVMM and focus on the local Hyper-V Manager. Hyper-V Manager is not able to connect to the local server – it shows the connecting to Virtual Machine Management Service state and never connects.

    Please guide me on where I should start...


    DevT-MCT

    Tuesday, February 12, 2019 3:30 PM
  • Bypassing your installed management environment can introduce other issues, but if you are willing to do that, okay.

    Check that your antivirus is excluding all VM files. If you are using a third-party antivirus, try disabling it.

    Check that the user trying to connect is a member of the local Hyper-V Administrators group (quick checks for this and the antivirus exclusions are sketched below).

    Have you rebooted the server?

    Is this in a standalone environment or is the machine part of a domain?

    Use your favorite search engine to search for 'Hyper-V Manager not able to connect the Local Server'. You will find this to be a fairly common issue, generally resolved by a configuration change.
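
    If it helps, here is a rough sketch of those two checks in PowerShell (this assumes Windows Defender for the antivirus part; a third-party product would have its own console):

    # List the members of the local Hyper-V Administrators group
    Get-LocalGroupMember -Group "Hyper-V Administrators"

    # Add a user if needed (run elevated); "DOMAIN\username" is a placeholder
    Add-LocalGroupMember -Group "Hyper-V Administrators" -Member "DOMAIN\username"

    # If Windows Defender is in use, list the configured path exclusions
    # (the VM configuration and VHD/VHDX folders should normally be excluded)
    Get-MpPreference | Select-Object -ExpandProperty ExclusionPath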


    tim

    Tuesday, February 12, 2019 3:53 PM
  • Not sure if it's the same issue, but we have something similar happening. We have two freshly installed Windows Server 2019 servers that were set up at the same time, one running development VMs and the other running production VMs. The development one stopped allowing Hyper-V Manager to connect from itself, from the other server, or from Windows 10. Oddly, I get no errors at all when I try to re-add the server in the manager through the GUI: right-click Hyper-V Manager, choose Connect to Server, select Local Computer or Another Computer, hit OK, and the window goes away without the server being added, but there's also no error. I checked the Windows event logs and I'm not finding any errors there.

    I tried running Get-VM from an admin PowerShell window and it just sits there, neither executing nor erroring. I've left the window open for about 15 minutes now (a non-blocking way to run this is sketched at the end of this post).

    I'm going to try rebooting the server after hours to see if that resolves the issues but I fear for the state of the VMs since I can't actually connect to the server to ensure they save or shutdown before doing so.  I can still RDP to each VM so I'll likely do that and shut each one down that way.
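
    One way to probe the host without tying up the console is something like this (just a sketch):

    # Run Get-VM in a background job so a hung VMMS doesn't block the console
    $job = Start-Job -ScriptBlock { Get-VM }
    if (Wait-Job -Job $job -Timeout 30) {
        Receive-Job -Job $job          # VMMS responded; show the VM list
    } else {
        Write-Warning "Get-VM did not return within 30 seconds - VMMS is likely hung"
        Stop-Job -Job $job
    }
    Remove-Job -Job $job -Force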

    Monday, February 18, 2019 7:22 PM
  • We are facing the same issue...

    DevT-MCT

    Wednesday, February 20, 2019 11:00 AM
  • Hi,

    Sorry for my delay. 

    Would you mind summarizing the current situation with this issue, so that we can find more clues?

    As I understand it, you cannot connect to the local computer in Hyper-V Manager. Do you get any error message during the operation? If possible, please post a screenshot, and remember to cover up any private information.

    Please also check whether there are any Hyper-V-related error logs in Event Viewer. You can first clear the logs, reconnect to the host, and then collect the event logs again.

    https://blogs.technet.microsoft.com/virtualization/2018/01/23/looking-at-the-hyper-v-event-log-january-2018-edition/

    Also, can you currently RDP to the VMs on this host? You can also type "vmconnect" in the Run dialog.

    Hope the above information helps you. If you have any questions or concerns, please feel free to let me know.
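
    For example, something like the following could help collect that information (just a sketch; "NameOfVM" is a placeholder):

    # Pull the most recent warnings/errors from the main Hyper-V admin channels
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 100 |
        Where-Object { $_.Level -le 3 }
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-Worker-Admin" -MaxEvents 100 |
        Where-Object { $_.Level -le 3 }

    # Open a console session to a single VM directly, bypassing Hyper-V Manager
    vmconnect.exe localhost "NameOfVM"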

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Thursday, February 21, 2019 8:29 AM
    Moderator
  • Hi,

    Just want to confirm the current situation.

    Please feel free to let me know if you need further assistance.

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Monday, February 25, 2019 1:34 PM
    Moderator
  • I had to reboot my servers so we could actually use them, but if it happens again I'll delve deeper. I had gone through the normal Application & System logs and found nothing relevant; I also went through the Hyper-V-specific logs and found nothing there either. I did not try vmconnect, but I will next time.
    Monday, February 25, 2019 1:45 PM
  • Hi,

    How are things going? Any update?

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Thursday, February 28, 2019 9:40 AM
    Moderator
  • Hi,

    Can you use vmconnect to connect to a specific VM?

    Best regards,

    Michael


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Thursday, February 28, 2019 9:41 AM
    Moderator
  • I've had this issue happen again. I've loaded VMConnect on two systems, one that's working and this one that is not. On the one that's working it enumerates the list of virtual machines; on this one it just says <Loading...> and never stops. I tried typing the name of a VM I know is on the system and hit OK, but nothing happens (the window just stays there and I can keep clicking OK with no results). Hyper-V Manager has once again removed the local server, I can't connect remotely, and the Get-VM PowerShell command just sits forever.

    • Windows Server 2019 (all updates except the new KB4482887 update)
    • Dell PowerEdge R540 (BIOS 1.6.11; 1.7.0 is the newest)
    • Intel dual-port X520 10 GbE adapters (driver 23.5.1; 23.5.2 is the newest)
    • NIC teaming of the X520 ports in LACP teaming mode with Hyper-V Port load balancing
    • Hyper-V external virtual switch on that team with SR-IOV enabled
    • The adapters support RSS and VMQ, and both are enabled (configuration check sketched below)
    • VMs set to Save State on shutdown
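
    For anyone comparing setups, that configuration can be checked with something like this (a sketch; cmdlet output may vary with the drivers in use):

    # NIC team and its teaming/load-balancing mode
    Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm

    # Virtual switch and whether SR-IOV is enabled on it
    Get-VMSwitch | Format-List Name, SwitchType, IovEnabled, IovSupport, IovSupportReasons

    # RSS and VMQ state on the physical adapters
    Get-NetAdapterRss | Format-Table Name, Enabled
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

    # Note: SR-IOV generally cannot be used through an LBFO team, so IovEnabled on a
    # switch bound to a team may not actually take effect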

    I have a better understanding now of what might have caused this, but I'm not sure why. I had 6 VMs running on this host and went to start a 7th for testing, and I hit the same issue I had seen just before the last time this problem happened. The VM would show "Starting" at 10% progress, sit there for about 5 minutes, and eventually error out without ever starting. Once that happened I could no longer view the settings of any VM on that system using the local or remote Hyper-V Manager. I could still connect to the system and attempt to start/stop/save/connect to VMs. However, after a period of time it escalated from not being able to view VM settings to the entire Hyper-V Manager not being able to connect, putting me back in this state. I copied out all of the errors that occurred while I was trying to start the 7th VM, which is when this snowball started to roll.

    The first errors to appear are the ones about "Cleaning up stale reference point(s)" failing, which happened several times over the 5-minute span of the VM trying to start (a way to check for leftover reference points is sketched after the logs). Then come the errors related to the virtual network, which are what Hyper-V Manager shows me once the 5-minute timeout hits and which also appear in the logs. As soon as I click OK on those errors I can no longer view or edit the settings of any VM and can no longer start any more VMs; they all hit exactly the same issue.

    Log Name:      Microsoft-Windows-Hyper-V-VMMS-Admin
    Source:        Microsoft-Windows-Hyper-V-VMMS
    Date:          3/1/2019 3:29:49 PM (repeated at 3:31:04, 3:32:19, 3:33:34, 3:34:49, and 3:36:04 PM)
    Event ID:      19060
    Level:         Error
    User:          SYSTEM
    Description:
    'DIST-2019-SQL-STIG' failed to perform the 'Cleaning up stale reference point(s)' operation. The virtual machine is currently performing the following operation: 'Starting'. (Virtual machine ID 20F66199-7EE9-406A-A06A-6D1F4D29A9AE)
    
    
    Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
    Source:        Microsoft-Windows-Hyper-V-SynthNic
    Date:          3/1/2019 3:31:11 PM
    Event ID:      12670
    Level:         Error
    User:          NT VIRTUAL MACHINE\20F66199-7EE9-406A-A06A-6D1F4D29A9AE
    Description:
    'DIST-2019-SQL-STIG' failed to allocate resources while connecting to a virtual network: The wait operation timed out. (0x80070102) (Virtual Machine ID 20F66199-7EE9-406A-A06A-6D1F4D29A9AE). The Ethernet switch may not exist.
    
    
    Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
    Source:        Microsoft-Windows-Hyper-V-SynthNic
    Date:          3/1/2019 3:36:13 PM
    Event ID:      12670
    Level:         Error
    User:          NT VIRTUAL MACHINE\20F66199-7EE9-406A-A06A-6D1F4D29A9AE
    Description:
    'DIST-2019-SQL-STIG' failed to allocate resources while connecting to a virtual network: This operation returned because the timeout period expired. (0x800705B4) (Virtual Machine ID 20F66199-7EE9-406A-A06A-6D1F4D29A9AE). The Ethernet switch may not exist.
    
    
    Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
    Source:        Microsoft-Windows-Hyper-V-Worker
    Date:          3/1/2019 3:36:13 PM
    Event ID:      12006
    Level:         Error
    User:          NT VIRTUAL MACHINE\20F66199-7EE9-406A-A06A-6D1F4D29A9AE
    Description:
    'DIST-2019-SQL-STIG' Synthetic Ethernet Port: Failed to finish reserving resources with Error 'This operation returned because the timeout period expired.' (0x800705B4). (Virtual machine ID 20F66199-7EE9-406A-A06A-6D1F4D29A9AE)
    
    
    Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
    Source:        Microsoft-Windows-Hyper-V-Worker
    Date:          3/1/2019 3:36:13 PM
    Event ID:      12030
    Level:         Error
    User:          NT VIRTUAL MACHINE\20F66199-7EE9-406A-A06A-6D1F4D29A9AE
    Description:
    'DIST-2019-SQL-STIG' failed to start. (Virtual machine ID 20F66199-7EE9-406A-A06A-6D1F4D29A9AE)
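
    For anyone who wants to check whether stale reference points are actually left behind, something like this should list them (a sketch; I believe root\virtualization\v2 is the WMI namespace Hyper-V uses for reference points, and the VM name below is the one from the logs above):

    # List any reference points Hyper-V is tracking for VMs on this host
    Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_VirtualSystemReferencePoint |
        Select-Object ElementName, InstanceID

    # Checkpoints are easier to see and can be reviewed per VM
    Get-VMSnapshot -VMName "DIST-2019-SQL-STIG"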
    
    

    Monday, March 4, 2019 2:38 PM
  • Hi,

    Has a resolution been found for this? I'm having the same issue. I have two 4-node Windows Server 2019 Hyper-V failover clusters, all built at the same time. In SCVMM I see a message saying "Host Not Responding", but all of the VMs are still functioning and I can RDP to them.

    On the host that isn't responding I can't connect to the local Hyper-V Manager console. The same thing has happened across the two clusters (on two different sites) and on each of the nodes at various times, and I have to reboot the server to gain access again. When I reboot the host it hangs while trying to stop the Hyper-V Virtual Machine Management service, and I then have to power cycle it through the iDRAC. When the server boots again, everything is fine.

    When I run VMConnect I get the same result as gintal: it just sits there saying "Loading....".

    Tuesday, June 25, 2019 9:30 AM
  • Hello.
    We have the same trouble.
    We have a cluster of 8 Windows Server 2016 nodes. The validation report shows "OK". For a while (1-2 weeks) everything works well, then for unknown reasons a node suddenly "falls out" of the cluster. It is impossible to connect to such a node via Hyper-V Manager. The virtual machines on that host can no longer be managed, but they continue to run. We opened a case with Microsoft technical support, but support has not yet helped us find out the reason for this behavior of the nodes.

    We can correct the situation only by rebooting the node.

    Tuesday, June 25, 2019 12:18 PM
  • We were seeing this on our Dell PowerEdge R540 servers with Windows Server 2019. The only way to get it to stop happening was to decrease the load by running fewer VMs. It always seemed to kick off when some unknown "next" VM was turned on. I hadn't had it happen for 2 months after I started running fewer VMs on that hardware, but recently some co-workers in a different area started having this happen as well: Dell PowerEdge R530 systems that were previously Windows Server 2016, rebuilt as Windows Server 2019, and the exact same issues listed by others started to happen.

    Systems Administrator Senior - University of Central Florida

    Wednesday, June 26, 2019 3:01 PM
  • We're using Dell R740 servers, but there isn't a large load on them yet. It can happen with only 3 or 4 VMs on a node.
    Wednesday, June 26, 2019 4:26 PM
  • So we continue to have these issues with Windows Server 2019 running all available updates as of today (10/15/2019). We've narrowed it down to something related to Generation 2 VMs that have Secure Boot + virtual TPM enabled (which apparently makes them a shielded VM even if you don't check the Shielded box). If one of these VMs is introduced to a host system and is then rebooted, there's a very high chance the VM will show "Stopping" indefinitely and never show as stopped or rebooted in Hyper-V. The only solution is to reboot the host system, taking all VMs offline. If the host is not rebooted soon enough, the original inability to connect to the system happens after a couple of days. We found this happens on our newer Dell PowerEdge R540 and R730 servers, but our older Dell PowerEdge R715 servers do not have this issue. The new systems are TPM 2.0 and the older systems that work are TPM 1.4.

    We thought this had gone away because we had only been spinning up Generation 2 VMs with Secure Boot + virtual TPM on the older R715 hardware, until last week when I moved one to the R540 hardware. That host had been stable for 8 months, but as soon as I added that VM and rebooted it, the issue started happening again. I formatted an R540 system, did a completely fresh Windows Server 2019 installation, added back ONLY that same Generation 2 Secure Boot + virtual TPM VM, and the first time I rebooted it the host went bad.
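
    For anyone trying to identify which of their VMs fall into this category, something like this should list the Secure Boot and virtual TPM state per Generation 2 VM (a sketch using the stock Get-VMFirmware and Get-VMSecurity cmdlets):

    # List Generation 2 VMs and whether Secure Boot and a virtual TPM are enabled
    Get-VM | Where-Object { $_.Generation -eq 2 } | ForEach-Object {
        [pscustomobject]@{
            Name       = $_.Name
            SecureBoot = (Get-VMFirmware -VM $_).SecureBoot
            TpmEnabled = (Get-VMSecurity -VM $_).TpmEnabled
            Shielded   = (Get-VMSecurity -VM $_).Shielded
        }
    }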


    Systems Administrator Senior - University of Central Florida

    Tuesday, October 15, 2019 8:12 PM
  • MS patch KB4520062 seems to have finally resolved this issue for us, 8 months later (a quick install check is sketched below). It also resolves other unlisted issues, which you can read about here:

    https://social.technet.microsoft.com/Forums/en-US/e8c45a15-0b9a-4b2c-ae2a-c546eadbcf41/hyperv-2019-guest-vmamp39s-shutdown-unexpectedly?forum=winserverhyperv
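
    A quick way to confirm the update is actually installed on a host (a sketch; the cluster variant assumes the FailoverClusters module is available):

    # Check whether KB4520062 is installed on this host
    Get-HotFix -Id KB4520062

    # Or across all nodes of a failover cluster
    Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
        Get-HotFix -Id KB4520062 -ErrorAction SilentlyContinue
    }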


    Systems Administrator Senior - University of Central Florida

    • Proposed as answer by brentil Monday, October 28, 2019 5:23 PM
    Monday, October 28, 2019 5:23 PM