Monday, May 17, 2010 1:43 PM
I am running Hyper-V R2 on two HP BL460c G6 Blade Servers which are part of a CSV cluster. I installed two new 2003 x32 SP2 VMs, one per server, with the intention of creating a simple Web NLB cluster (using IIS). Here is what I did:
- I added two NICs to each VM (both on the same subnet and connected to the same Virtual Network I created via the Hyper-V Virtual Network Manager).
- I did not add any gateway or DNS settings on the NLB NICs.
- I moved the "LAN NIC" above the NLB NIC in the binding order (Advanced Settings in the Network Connections MMC).
- I manually configured the NLB MAC address in the VM configuration and checked the box to allow MAC address spoofing.
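As background on that last step: in unicast mode, NLB derives the cluster MAC address from the cluster IP — 02-bf followed by the four IP octets in hex. A small sketch of that derivation (the cluster IP used below is a made-up example, not mine):

```python
def unicast_nlb_mac(cluster_ip: str) -> str:
    """Derive the unicast-mode NLB cluster MAC (02-bf-w-x-y-z)
    from the dotted-quad cluster IP address."""
    octets = [int(o) for o in cluster_ip.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError(f"not a valid IPv4 address: {cluster_ip}")
    return "-".join(["02", "bf"] + [f"{o:02x}" for o in octets])

print(unicast_nlb_mac("192.168.1.100"))  # → 02-bf-c0-a8-01-64
```

This is the MAC you would enter as the static MAC in the VM configuration when spoofing is enabled.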
What I started to find was that:
- All hosts were able to ping the NLB IP, as well as each individual server IP.
- All hosts could connect to port 80 (http) and view web server content using the direct IP of each host.
- Some hosts could not connect to port 80 on the NLB IP address. I tried telnet NLBIP 80 and got a timeout.
- The behavior was inconsistent: some hosts were affected and others were not, across different subnets, different OSs, etc.
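The connectivity checks above can be scripted instead of repeating telnet by hand. A minimal sketch of the same TCP test (the addresses in the usage comment are placeholders, not my real IPs):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds -- the same check that `telnet host port` performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage against the cluster (addresses are placeholders):
#   port_open("NLB_IP", 80)    # times out -> False on affected hosts
#   port_open("node1_IP", 80)  # True, since the direct IPs work
```

Running this from several client hosts makes the inconsistent pattern easier to tabulate.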
Then I opened up a sniffer (Wireshark) on my PC (one of the hosts that couldn't connect), on the VM, and on the Hyper-V physical server:
- I saw packets leaving my PC destined for the NLB IP on port 80 (HTTP).
- I saw packets arriving at my Hyper-V server (physical), with source IP my PC and destination IP the NLB IP.
- Nothing was seen arriving at the VM.
- I did however see ping packets come through to the VM and back to my PC.
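To keep the three captures comparable, it helps to apply the same capture filter everywhere, keyed on the client and cluster addresses and matching both the pings and the TCP 80 traffic. A small helper to build that filter string for Wireshark/tshark (the IPs below are placeholders):

```python
def capture_filter(client_ip: str, nlb_ip: str, port: int = 80) -> str:
    """Build a BPF capture filter matching the ICMP and TCP traffic
    between a client host and the NLB cluster address."""
    return (f"host {client_ip} and host {nlb_ip} "
            f"and (icmp or tcp port {port})")

print(capture_filter("10.0.0.50", "10.0.0.100"))
# → host 10.0.0.50 and host 10.0.0.100 and (icmp or tcp port 80)
```

With the same filter on the PC, the parent partition, and the VM, it is immediately visible at which hop the TCP packets disappear while the ICMP packets continue.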
I tried a whole bunch of configuration changes: different settings, allowing all ports on the NLB (not just TCP 80), etc. I don't understand why ping packets pass through but TCP 80 does not. I also tried other ports, like TCP 3389 (RDP), and got nowhere. I suspect there is a problem somewhere with Hyper-V or my physical server.
Some configs of my physical server:
- I am using HP Teaming to create a high-throughput bonded NIC for the Virtual Network used by Hyper-V.
- I have updated my server with the latest firmware and drivers as of the beginning of May 2010
- As far as I know, I have installed all of the latest applicable bug fixes/patches that relate to Hyper-V and 2008 R2.
Any idea what is going on here and what I can do to get it working? Is there a way to "sniff" the Virtual Switch to see whether it is passing packets on to the VM or not?
Monday, May 17, 2010 6:17 PM
Hi Reuvy, is your NLB in unicast or multicast mode?
Thursday, May 20, 2010 5:10 AM
Unicast.
Thursday, May 20, 2010 1:36 PM
So, I think we have figured out the problem (if only we had a solution as well!).
We are using Teaming on our HP Blades, specifically the "Transmit Load Balancing with Fault Tolerance (TLB)" team type. That gives me the combined speed of both my NICs, instead of "Network Fault Tolerance (only)", which gives only the speed of one NIC at a time.
I did a search for known issues with physical server adapter teaming and NLB and found a few links:
- A little old, but: Using teaming adapters with network load balancing may cause network problems
- The quick summary of this post is, "Don't use NLB on teamed NICs."
- This also applies to Exchange 2010 (where I have a similar situation) and was asked here (albeit with no answer): Exchange 2010 CAS NLB and NIC Teaming
So I moved my VMs to a different Hyper-V server which was not using any hardware NIC teaming. I want to wait a few days to be sure that this fixes things before I claim this to be the solution.
- Marked As Answer by Mervyn Zhang Friday, May 21, 2010 10:14 AM