Tuesday, May 01, 2012 8:33 PM
Here is my scenario: I have 57 hosts with 300 VMs running across 8 datacenters. All the datacenters have multiple different VLANs. We just set up VMM 2012 RTM and migrated all the hosts from VMM 2008 R2.
My issue is that when I want to change the VLAN ID on a virtual machine, it is now a drop-down box populated by Logical Networks under Networking. The old way, I could just type the VLAN tag into a text field.
At first glance this wouldn't seem like much of an issue, but I have ~50+ VLANs per site, or about 400 VLANs district-wide. I can script all of them into logical networks, but here is where the issue gets compounded: not all servers in each site have the same network config. We are migrating toward a single trunked interface that carries all VLANs, but are only 90% complete.
Servers #1, 2, 3
Team-00 (trunked with all VLANs in a NIC team)
Team-01 VLAN 1 (virtual NIC for VLAN)
Team-01 VLAN 2 (virtual NIC for VLAN)
Team-01 VLAN 3 (virtual NIC for VLAN)
So if I have an IP pool assigned to Team-00, I cannot assign it to Team-01 VLAN, even though I could do this with VMM 2008. The servers are not a cluster and are functioning properly. Now that I have 2012 installed, I have to pull up Hyper-V Manager and make the change from there. This is a major pain, since we are in the process of re-doing our networks and we move about 20-30 machines every day.
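Scripting the VLANs into logical networks, as mentioned above, can be sketched with the VMM 2012 cmdlets roughly as follows. This is only a sketch, not the exact script I use: the CSV layout (Name,Subnet,VlanId), file path, and the "All Hosts" host group name are assumptions for illustration.

```powershell
# Sketch: bulk-create one logical network (with a network definition)
# per VLAN from a CSV. Assumes a CSV with Name,Subnet,VlanId columns
# and the default "All Hosts" host group -- adjust for your environment.
Import-Module virtualmachinemanager

$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

Import-Csv .\vlans.csv | ForEach-Object {
    # One logical network per VLAN keeps the drop-down entries 1:1 with tags
    $ln = New-SCLogicalNetwork -Name $_.Name
    $subnetVlan = New-SCSubnetVLan -Subnet $_.Subnet -VLanID ([int]$_.VlanId)
    New-SCLogicalNetworkDefinition -Name "$($_.Name)-Def" `
        -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan
}
```

Even with this scripted, the per-site differences in host NIC layout are what make the model painful, since the same logical network cannot map cleanly to both Team-00 and the per-VLAN virtual NICs.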
Here is another example, where I cannot create overlapping IP pools:
Cluster #1 (HBA SAN)
Servers #1 - 12
Team-00 (trunked with all VLANs)
Cluster #2 (iSCSI SAN)
Team-00 (trunked with all VLANs)
iSCSI-01 (virtual NIC#1 for iSCSI NIC to VM)
iSCSI-02 (virtual NIC#2 for iSCSI NIC to VM)
When I have a VM in Cluster #1 that needs iSCSI access, I just add another NIC, bind it to Team-00, and tag it to the iSCSI VLAN. In Cluster #2, I add two other NICs and bind them to iSCSI-01 and iSCSI-02. That part works fine; the issue is I cannot assign overlapping IP pools, even though it is possible to have the same VLAN on Team-00, iSCSI-01, and iSCSI-02. With VMM 2008, I just set the NIC to trunked (which required VLAN promiscuous mode on the host) and allowed all VLANs on that team. It was a great system that has allowed a ton of network flexibility. The second network example has worked well where I need to give a virtual machine full performance to an iSCSI target. That way I can have a 6 TB file server accessing iSCSI directly, where the thought of a 6 TB VHD frankly scares me.
There may not be a fix; if so, we are going to roll back to VMM 2008 until the project is complete. I understand the reasoning for Logical Networking, but it is going to cause us major pains going forward.
All replies
Sunday, June 03, 2012 5:34 AM
Just a quick update: I still haven't found a solution, and unfortunately I have found more issues.
The first new issue is VLAN tags hopping when deploying a new VM. When you create a new VM you can specify the VLAN tag under the network portion of the hardware profile. Well, I have deployed 6 VMs and specified VLAN 402 (our server/test network), but after creating the VM it moves to VLAN 250 (the first VLAN in the list). I have verified from the VMs (using ipconfig) that they were indeed on VLAN 250. I can fix the issue by moving the VM to a different VLAN and back, but it makes it impossible to rely on scripting to create a new VM.
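Until there is a real fix, the manual "move it back to the right VLAN" step can at least be scripted as a post-deploy pass. A minimal sketch, assuming the VMM 2012 cmdlets and an example VM name (adjust the name and VLAN ID for your environment):

```powershell
# Workaround sketch: re-apply the intended VLAN tag after the VM is
# created, since the deploy step lands on the wrong VLAN.
# "TESTVM01" and VLAN 402 are examples, not fixed values.
Import-Module virtualmachinemanager

$vm = Get-SCVirtualMachine -Name "TESTVM01"
Get-SCVirtualNetworkAdapter -VM $vm | ForEach-Object {
    # Force the tag back to the VLAN that was requested at deploy time
    Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $_ `
        -VLanEnabled $true -VLanID 402
}
```

This obviously defeats the point of a one-step scripted deployment, but tacking it onto the end of a provisioning script is less painful than fixing each VM by hand.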
I am hoping that the first hotfix/service pack will fix this issue. I really like VMM 2008 R2, and I was hoping that 2012 would be better. So far I am running into enough issues that I will keep 2012 only in our test environments.
Another issue that popped up is that the admin console randomly locks up and has to be restarted.
If I find any fixes I will update my post. Still haven't had one reply to my original post; here's hoping.