Hyper-V Live Migration - Windows Server 2016

-
Good day
Has anyone experienced an issue with live migration of virtual servers running configuration version 8.0 on a Windows Server 2016 S2D Hyper-V cluster?
I have built a two-node Hyper-V failover cluster using Windows Server 2016 and Storage Spaces Direct. I have imported some virtual servers from a different Hyper-V (2012 R2) cluster, which are running Windows Server 2012 R2 and configuration version 5.0, without any issues, and I have configured new virtual servers running Windows Server 2016 and configuration version 8.0, also without any issues.
I have tested live migration with all servers using Hyper-V Manager and then subsequently within Fail-over Cluster Manager with the following results:
- Virtual Servers running configuration version 5.0 live migrate successfully every time
- Virtual Servers running configuration version 8.0 do not live migrate successfully
I receive the following errors when trying to live migrate a virtual server running configuration version 8.0:
- Event ID 21111 - Live migration of 'Virtual Machine ***' failed - not much information on this one!
- Event ID 12660 - Cannot open handle to Hyper-V storage provider - the Microsoft article for this event refers to Windows Server 2008 and states that I should completely re-image the server and reinstall the Hyper-V role... this seems a bit excessive, seeing that live migration works perfectly for virtual servers running configuration version 5.0!
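For anyone chasing the same symptoms, both event IDs can be pulled from the Hyper-V VMMS admin channel with Get-WinEvent. A minimal sketch, assuming the default Server 2016 log name and an example one-day window:

```powershell
# Pull recent live-migration failures (21111) and storage-provider errors (12660)
# from the Hyper-V VMMS admin log on the local node.
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-Hyper-V-VMMS-Admin'
    Id        = 21111, 12660
    StartTime = (Get-Date).AddDays(-1)
} | Format-Table TimeCreated, Id, Message -AutoSize -Wrap
```

Run this on both nodes; the message body of 21111 on the destination node sometimes carries more detail than the source-side copy.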
Some information regarding the build:
- Two-node Hyper-V failover cluster running Windows Server 2016, built on Dell R630 servers with the latest firmware and BIOS updates
- Switch Embedded Teaming using 4 x 1 GbE NICs to create a 4 Gb pipe for VM traffic
- Storage Spaces Direct using SSD for cache and SATA for capacity
- A directly connected 10 GbE storage network (no switch or InfiniBand yet… this will be installed in a day or two) using jumbo frames
- Two Cluster Shared Volumes with VMs distributed across both of them
- A third CSV for Veeam Backup & Replication (busy testing it on this cluster)
I would appreciate any advice here, thanks!
-
Hi,
Just curious: what vendor and type are your 4 x 1 GbE NICs?
And on the case itself: what have you tried to live migrate? Just the VM load, or storage as well?
And does a new version 8.0 virtual machine with no operating system installed also fail to live migrate?
Radek
-
I am using the following Network Cards for VM traffic:
Broadcom 5720 QP NetXtreme Gigabit Ethernet (driver name: b57nd60a)
I have tried the following:
- Live Migration of VM Resources - Failed
- Live Migration of VM Resources with no HDD attached - Failed
- Live Storage Migration of VM - Failed
- Quick Migration of VM while powered on - Successful
- Quick Migration of VM while powered off - Successful
- Storage Migration of VM while powered off - Successful
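For reference, the failing and succeeding cases in that list can also be driven from PowerShell, which sometimes surfaces a more detailed error than Failover Cluster Manager does. A sketch, with placeholder VM and node names:

```powershell
# Live migration of a clustered VM (the failing case) - verbose output often
# includes more detail than the bare Event ID 21111 message.
Move-ClusterVirtualMachineRole -Name 'VMName' -Node 'Node2' -MigrationType Live -Verbose

# Quick migration (the case that succeeds), for comparison.
Move-ClusterVirtualMachineRole -Name 'VMName' -Node 'Node2' -MigrationType Quick -Verbose
```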
-
And are your version 5.0 and version 8.0 VMs using the same network for live migration?
It looks like some functionality introduced after configuration version 5.0 is not working correctly.
Can you take a look at this article, http://www.tech-coffee.net/switch-embedded-teaming/ , and check whether your configuration meets the requirements for SET?
And the Broadcom 5720 QP NetXtreme Gigabit Ethernet is non-RDMA-capable, right? Check with:
Get-SmbClientNetworkInterface
Radek
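To check the SET side of this suggestion, something like the following shows the teaming mode and team members behind the virtual switch (the switch name below is a placeholder):

```powershell
# Inspect the SET team behind the vSwitch: the members should be identical
# NICs running identical driver versions, per the SET requirements.
Get-VMSwitchTeam -Name 'SETswitch' | Format-List *

# Confirm the switch itself reports embedded teaming.
Get-VMSwitch -Name 'SETswitch' |
    Format-List Name, EmbeddedTeamingEnabled, NetAdapterInterfaceDescriptions
```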
- Proposed as answer by Leo Han (Moderator), Thursday, March 30, 2017 2:16 AM
- Unproposed as answer by Andrew Warburton, Tuesday, April 4, 2017 5:10 PM
-
Yes, same network.
I believe that the 5720 QP is not RDMA-capable, although I am able to enable it in the NIC's configuration settings.
---> Get-SmbClientNetworkInterface

Interface Index RSS Capable RDMA Capable Speed
--------------- ----------- ------------ -----
13 True False 10 Gbps
8 True False 10 Gbps
26 False False 10 Gbps
30 True False 4 Gbps
4 True False 4 Gbps
16 True False 4 Gbps
14 True False 4 Gbps
3 True False 4 Gbps

---> Get-NetAdapterVmq
Name InterfaceDescription Enabled BaseVmqProcessor MaxProcessors NumberOfReceive
Queues
---- -------------------- ------- ---------------- ------------- ---------------
***2 QLogic BCM57810 10 Gigabit ...#41 False 0:6 2 0
***1 QLogic BCM57810 10 Gigabit ...#40 False 0:2 2 0
***4-0 Broadcom NetXtreme Gigabit E...#4 False 0:0 16 8
***3-0 Broadcom NetXtreme Gigabit Eth... False 0:0 16 8
***2-0 Broadcom NetXtreme Gigabit E...#3 False 0:0 16 8
***1-0 Broadcom NetXtreme Gigabit E...#2 False 0:0 16 8

---> Get-NetAdapterRdma
Name InterfaceDescription Enabled
---- -------------------- -------
vEthernet (***) Hyper-V Virtual Ethernet Adapter #5 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter #4 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter #3 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter #2 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter False
-
Hi,
Are there any updates on the issue?
You could mark the reply as answer if it is helpful.
Best Regards,
Leo

Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
-
Hi Leo
No, unfortunately nothing has helped so far. I have upgraded the network cards to Mellanox ConnectX-3 Pros, and RDMA is configured; see below.
---> Get-NetAdapterRdma
Name InterfaceDescription Enabled
---- -------------------- -------
vEthernet (***) Hyper-V Virtual Ethernet Adapter #5 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter #4 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter #3 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter #2 False
vEthernet (***) Hyper-V Virtual Ethernet Adapter False
*** 2 Mellanox ConnectX-3 Pro Ethernet Adapter True
***-201 Mellanox ConnectX-3 Pro Ethernet Adap... True
*** 2 Mellanox ConnectX-3 Pro Ethernet Adap... True
***-202 Mellanox ConnectX-3 Pro Ethernet Adap... True

---> Get-SmbClientNetworkInterface
Interface Index RSS Capable RDMA Capable Speed
--------------- ----------- ------------ -----
34 False False 100 Kbps
28 False False 100 Kbps
25 False False 100 Kbps
23 False False 100 Kbps
21 False False 100 Kbps
6 False False 100 Kbps
31 False False 100 Kbps
27 False False 100 Kbps
26 False False 100 Kbps
4 False False 100 Kbps
2 True True 40 Gbps
20 True True 40 Gbps
30 False False 10 Gbps
35 True False 4 Gbps
5 True False 4 Gbps
18 True False 4 Gbps
16 True False 4 Gbps
3 True False 4 Gbps

---> Get-NetAdapterRSS
InterfaceDescription : Mellanox ConnectX-3 Pro Ethernet Adapter #3
Enabled : True
NumberOfReceiveQueues : 8
Profile : NUMAStatic
BaseProcessor: [Group:Number] : 0:2
MaxProcessor: [Group:Number] : 0:4
MaxProcessors : 2
RssProcessorArray: [Group:Number/NUMA Distance] : 0:2/32767 0:4/32767
IndirectionTable: [Group:Number] : 0:2 0:4 0:2 0:4 0:2 0:4 0:2 0:4
(pattern repeats, alternating 0:2 and 0:4, for the remaining entries)

InterfaceDescription : Mellanox ConnectX-3 Pro Ethernet Adapter #2
Enabled : True
NumberOfReceiveQueues : 8
Profile : NUMAStatic
BaseProcessor: [Group:Number] : 0:6
MaxProcessor: [Group:Number] : 0:8
MaxProcessors : 2
RssProcessorArray: [Group:Number/NUMA Distance] : 0:8/0 0:6/32767
IndirectionTable: [Group:Number] : 0:8 0:6 0:8 0:6 0:8 0:6 0:8 0:6
(pattern repeats, alternating 0:8 and 0:6, for the remaining entries)
- Edited by Andrew Warburton, Tuesday, April 4, 2017 5:12 PM
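Worth noting in the output above: the Mellanox physical ports report RDMA enabled, but the vEthernet host adapters still show False. If SMB and live-migration traffic is meant to flow over host vNICs attached to the SET switch (a converged setup), rather than directly over the physical ports, RDMA has to be enabled on those vNICs explicitly on Server 2016. A sketch, with placeholder adapter names:

```powershell
# Enable RDMA on the host vNICs carrying SMB traffic (names are placeholders
# for whatever the converged vNICs are called on this cluster).
Enable-NetAdapterRdma -Name 'vEthernet (SMB1)', 'vEthernet (SMB2)'

# Verify that the SMB client now sees RDMA-capable interfaces.
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```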
-
I have a two-node Hyper-V failover cluster using Windows Server 2016 Datacenter and Storage Spaces Direct. I have imported some virtual servers from different Hyper-V (2012) hosts running Windows Server 2012/2012 R2, and I have configured new virtual servers running Windows Server 2016; all my virtual machines are on configuration version 8.0. No problems at all with the cluster, S2D, live migration, etc.
But on both servers the Hyper-V system log is full of "Cannot open handle to Hyper-V storage provider.", Event ID 12660.
Dell PE R740xd, latest firmware and OS updates.
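Since Event ID 12660 shows up even on this otherwise healthy cluster, it may help to check how often it is being logged before treating it as the cause of anything. A sketch, assuming the event lands in the default VMMS admin channel:

```powershell
# Count 12660 occurrences per day, to see whether the error correlates with
# live-migration attempts or is just continuous background noise.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Hyper-V-VMMS-Admin'; Id = 12660 } |
    Group-Object { $_.TimeCreated.Date } |
    Sort-Object Name |
    Format-Table Name, Count -AutoSize
```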