May 19, 2010, 7:30 AM
I'm running a two-node failover cluster with SCVMM, SCDPM, ... running in VMs.
Sometimes there are issues connecting from the VMs to one or both cluster nodes via the administrative shares (e.g. \\clusternode1\c$ or \\clusternode1\admin$ ...). These connections are necessary for the VMM agent and the DPM agent - so when they fail, I have trouble with DPM, VMM, ... too.
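For anyone seeing the same symptom, this is roughly how the failing connections can be checked from inside a VM (a hedged sketch; `clusternode1` is just the example node name from above, substitute your own):

```shell
:: Drop any cached connection first, then try the admin share fresh.
:: "clusternode1" is a placeholder node name.
net use \\clusternode1\admin$ /delete
net use \\clusternode1\admin$

:: Listing the C$ share is a second quick test of the same path.
dir \\clusternode1\c$
```

When the problem occurs, these commands fail even though ping to the same node still works.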
It's very strange behaviour: after a live migration to the other node, the connection works again for a while, but some time later the same issue returns. Live migrate again, and the connection is up. A reboot of the node also helps for a while ...
I can connect from node to node, the firewalls on the VMs and nodes are completely disabled for testing, and ping etc. work without issues. I'm logged in with a domain admin account. There are no event log entries, and cluster validation is 100% green.
Could this be an IPv6 problem? Or a bug? It's very strange and painful ...
May 21, 2010, 3:10 PM
I opened an MS support case ... we'll see ...
June 7, 2010, 1:56 PM
Working with MS support, we figured out that it was a timing issue with the Intel NICs. We disabled "Allow management OS to share this network adapter" on the Hyper-V virtual network and used a dedicated NIC for host management only - no more issues since then.
So it's highly recommended to use a dedicated NIC for node management and not to share it with Hyper-V VM traffic!
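On later Hyper-V versions (Server 2012 and up) the same separation can be scripted with the Hyper-V PowerShell module instead of the Hyper-V Manager checkbox; a hedged sketch, where the adapter name "VM-NIC" and switch name "VMSwitch" are placeholders:

```shell
# Create a VM switch WITHOUT sharing it with the management OS
# ("VM-NIC" and "VMSwitch" are placeholder names):
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VM-NIC" -AllowManagementOS $false

# Or remove management-OS sharing from an existing switch:
Set-VMSwitch -Name "VMSwitch" -AllowManagementOS $false
```

Host management traffic then goes over a separate physical NIC that is not bound to any VM switch, which is exactly the dedicated-NIC setup described above.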
- Marked as answer by Elden Christensen (Microsoft Employee, Owner), June 8, 2010, 4:25 AM