SCVMM 2012 - Maintenance Mode - Strange behavior

    Question

  • Hi all

    I recently upgraded our SCVMM 2008 R2 to SCVMM 2012, and now I see strange behavior during our patch-day maintenance process. At the moment I don't use the integrated patch orchestrator in SCVMM 2012, so I do it manually.

    I have 3 Hyper-V Windows Server 2008 R2 hosts in an HA cluster with a SAN CSV.

    process in SCVMM 2008 R2:

    - Put Server 1 in maintenance mode
    - SCVMM evacuates all VMs to the other 2 hosts
    - Patch Server 1, restart, and disable maintenance mode
    - Put Server 2 in maintenance mode
    - SCVMM evacuates all VMs to the other 2 hosts
    - Patch Server 2, restart, and disable maintenance mode
    - Put Server 3 in maintenance mode
    - SCVMM evacuates all VMs to the other 2 hosts
    - Patch Server 3, restart, and disable maintenance mode

    but now in SCVMM 2012:

    process in SCVMM 2012:

    - Put Server 1 in maintenance mode
    - SCVMM evacuates all VMs to Server 2
    - Patch Server 1, restart, and disable maintenance mode
    - Put Server 2 in maintenance mode
    - SCVMM evacuates all VMs to Server 3
    - Patch Server 2, restart, and disable maintenance mode
    - Put Server 3 in maintenance mode
    - SCVMM evacuates all VMs to Server 1
    - Patch Server 3, restart, and disable maintenance mode

    Because it doesn't use both available servers, the whole process takes longer than needed. How can I prevent this behavior?

    thanks

    JBAB

    Thursday, May 10, 2012 8:15 AM

All replies

  • Hi JBAB,

    I would advise you to configure all the VMs running on Server 1 to fail back to it automatically once it has restarted. You can do that in Failover Cluster Manager. In addition, you can specify your preferred owners.

    You can repeat these operations on your two other hosts.
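
    For example, a rough PowerShell sketch of the same settings (run on one of the cluster nodes; the VM group name and the node names are placeholders for your environment):

        Import-Module FailoverClusters

        # Placeholder name for the cluster group of one of your HA VMs
        $group = Get-ClusterGroup -Name "SCVMM VM1 Resources"

        # Prefer Server1 for this VM; the list order is the preference order
        Set-ClusterOwnerNode -Group $group.Name -Owners "Server1", "Server2", "Server3"

        # Allow automatic failback to the preferred owner once it is online again
        # (AutoFailbackType 1 = allow failback; window of -1 = fail back immediately)
        $group.AutoFailbackType    = 1
        $group.FailbackWindowStart = -1
        $group.FailbackWindowEnd   = -1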

    Cheers,


    David LACHARI
    MVP Virtual Machine - VCP and VTSP vSphere 4.1
    Blog DansTonCloud
    Author of the book "Hyper-V v2 sous Windows Server 2008 R2"

    Thursday, May 10, 2012 9:43 AM
  • Hi David

    Thanks for the tip, I didn't know that.

    But that's not my problem. At the end of the patch process it is perfectly fine to have one server empty. My problem is that SCVMM 2012 does not evacuate the VMs in an optimized way: it picks just one host and evacuates all the VMs to that single host, instead of spreading them across 2 hosts as SCVMM 2008 R2 did.

    What is the behavior of the maintenance task in your environment? Does SCVMM 2012 spread the VMs across all hosts?

    thanks

    Thursday, May 10, 2012 10:33 AM
    For Live Migration, it is recommended to have at least 1 Gbps for this network. Did you implement this network configuration?

    Are the virtual machines configured with Dynamic Memory? When you live migrate VMs with Dynamic Memory, it can take longer than with fixed memory ...

    Do the virtual machines have an ISO attached?


    David LACHARI
    MVP Virtual Machine - VCP and VTSP vSphere 4.1
    Blog DansTonCloud
    Author of the book "Hyper-V v2 sous Windows Server 2008 R2"

    Thursday, May 10, 2012 3:07 PM
    Yes, I have 1 Gbps for Live Migration. Some VMs have Dynamic Memory, some don't. No ISOs attached.

    But that's not my problem :)

    This is my problem:

    SCVMM 2008 R2 evacuated the VMs from Server 1 in a better way. SCVMM 2012 puts every VM that was on Server 1 onto Server 2, instead of load balancing the cluster already during the evacuation process.


    • Edited by JBAB Friday, May 11, 2012 6:21 AM
    Friday, May 11, 2012 6:21 AM
  • Hi,

    This is even more annoying when you use the integrated Update Orchestrator. It moves the VMs to exactly the machine it wants to update next. So in the end all VMs end up on one host and you have to distribute them manually.

    I just encountered this behavior on a 3-node cluster; I'm a bit afraid to test it on our 7-node cluster.

    Thursday, May 17, 2012 10:25 AM
  • This is a known issue in SCVMM 2012. The only current workaround is to preemptively manually migrate the VMs to different hosts, I'm afraid.  I'll update this thread if a different fix becomes available.
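
    If it helps, here is a rough sketch of that workaround using the VMM 2012 cmdlets (treat it as an outline rather than a tested script; the host names are placeholders for a 3-node cluster like the one above):

        # Spread the VMs of the host you are about to patch across the remaining
        # nodes in a simple round-robin before putting it into maintenance mode.
        $sourceHost  = Get-SCVMHost -ComputerName "Server1"
        $targetHosts = "Server2", "Server3" | ForEach-Object { Get-SCVMHost -ComputerName $_ }

        $i = 0
        foreach ($vm in Get-SCVirtualMachine | Where-Object { $_.VMHost.Name -eq $sourceHost.Name }) {
            $target = $targetHosts[$i % $targetHosts.Count]   # alternate between the target hosts
            Move-SCVirtualMachine -VM $vm -VMHost $target | Out-Null
            $i++
        }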

    Monday, May 21, 2012 5:32 PM
  • Good to know guys. Yesterday I was doing some host maintenance for our Hyper-V cluster that is managed by VMM 2012 and noticed this exact behaviour. Had to manually go and distribute the VMs evenly via Live Migration.

    Friday, May 25, 2012 7:40 PM
  • Hi all,

    is there any news on this strange behavior?

    Thanks!

    Tuesday, June 12, 2012 12:23 PM
  • It is a known issue. It should be fixed in SP1.

    Tuesday, June 12, 2012 4:36 PM
  • Any idea if this is fixed in SCVMM 2012 SP1? From what I have seen I don't believe it is.

    Many thanks,

    Marcus

    Thursday, December 27, 2012 5:47 PM
    I have made a PowerShell script that will spread the VMs to different hosts before going into maintenance mode:

    http://vniklas.djungeln.se/2012/11/25/scvmm-2012-evacuate-vmhost-and-maintenance-script/

    It could be developed a bit more, but it is a start.
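
    For anyone scripting this themselves, the rough shape of the idea looks like the sketch below (this is not the full script from the link, and the host name is a placeholder): migrate the VMs away first, and only then enter maintenance mode.

        $vmhost = Get-SCVMHost -ComputerName "Server1"

        # ... spread the VMs across the other nodes first (see the round-robin
        #     example earlier in this thread) ...

        # Enter maintenance mode; -MoveWithinCluster live migrates any HA VMs
        # still left on the host to other cluster nodes.
        Disable-SCVMHost -VMHost $vmhost -MoveWithinCluster

        # ... patch and reboot the host ...

        # Take the host out of maintenance mode again.
        Enable-SCVMHost -VMHost $vmhost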

    //Niklas

    Thursday, December 27, 2012 6:33 PM
  • Any idea if this is fixed in SCVMM 2012 SP1? From what I have seen I don't believe it is.

    Many thanks,

    Marcus


    This has been marked as fixed. I haven't personally verified it, however. Do you have a case you can describe where you don't see the evacuated VMs being load balanced?
    Friday, December 28, 2012 7:51 AM
  • I believe so.

    I've put SP1 on a three-node cluster. I put one node into maintenance mode. All VMs seemed to go to one node rather than being load balanced. The node they were moved to then ran out of RAM, and the host OS gradually stopped responding (over a number of minutes; I couldn't stop it happening), and all VMs stopped/went offline. RAM then became available and the VMs came back up.

    My guess is that SCVMM looks at the host ratings before the move and sends all VMs to the top-rated node, without accounting for the RAM that is used up once the VMs have been moved to that node.

    I hadn't noticed this happen before (pre-SP1), but then I found this thread...

    Friday, December 28, 2012 9:26 AM
  • I have just re-validated the SP1 maintenance mode fix. In the clusters on which I tried it, the evacuated VMs were load balanced evenly across all target nodes.  That said, this isn't the only case where I've heard about incorrect behavior, so getting to the root cause would be very valuable.

    If you are having this problem, the following information would be very helpful:

    • Do you have dynamic memory VMs being evacuated? What does their assigned memory (MB) show as in the VMM console? (A query sketch follows this list.)
    • If you use placement to migrate them one by one, which hosts appear at the top of the list? Does that change after you have migrated a few?
    • Are there errors or warnings against some hosts in the above placement process?
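
    For the first bullet, a quick way to pull that information with the VMM cmdlets could look like the following (the host name is a placeholder, and the exact property that corresponds to the console's assigned memory column may differ between builds, so verify the property names on your system):

        # List the VMs on the node being evacuated with their memory settings,
        # to compare against what the VMM console shows before and after migration.
        Get-SCVirtualMachine |
            Where-Object { $_.VMHost.Name -eq "Server1" } |
            Select-Object Name, DynamicMemoryEnabled, Memory, DynamicMemoryMaximumMB |
            Sort-Object DynamicMemoryEnabled -Descending |
            Format-Table -AutoSize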

    Thanks, Hilton

    Thursday, January 03, 2013 8:40 PM