SCVMM 2012 SP1 (beta) maintenance mode puts non-clustered VMs on W2K12 hosts in saved state

  • Question

  • First, apologies if this has been dealt with elsewhere... if it has, I was not able to find that thread. ;-)

    I am playing with SCVMM 2012 SP1 (beta) and have two W2K12 server hosts in a cluster as my playing field. All hosts were updated today using Windows Update. (And yes, I ran into the "weak event was created" issue. ;-)

    When I enable maintenance mode through Failover Cluster Manager, it behaves exactly as I expected: it evacuates all clustered VMs (live!) from the node and leaves the non-clustered ones as they are. Same with CAU; it works as expected.

    Maintenance mode in SCVMM 2012 SP1 (beta), however, not only evacuates clustered VMs, it also puts non-clustered VMs into saved state. While this may be acceptable for some VMs, it certainly isn't for virtual domain controllers. I find that behavior generally strange, since it could unexpectedly disrupt other services when dependent VMs drop off the network for no apparent reason...

    1. How do I tell SCVMM 2012 that non-clustered VMs are not to be saved in maintenance mode? Specifically virtual domain controllers, of course...

    [edit]
    2. When I end maintenance mode, saved VMs are not automatically restarted, nor are evacuated VMs migrated back to their original host.

    3. If there is a reason for this (default?) behavior, what is it?

    Thanks in advance,

    Dom


    Saturday, January 12, 2013 1:01 PM

Answers

  • Thanks for the feedback, Dominique.

    Yes, SCVMM does not have a way to automatically manage migrations of non-HA VMs prior to maintenance mode. Your suggestions are perfectly valid, and your feedback is in line with our own thinking about possible ways to handle this in the future.

    Right now the workarounds are to make those VMs HA (in which case cluster migration will evacuate the host before maintenance mode), or to evacuate them manually or via script.

    Thanks

    Hilton

    • Marked as answer by Dominique Cote Tuesday, January 15, 2013 6:28 PM
    Tuesday, January 15, 2013 5:14 PM

All replies

  • First of all, you should install the RTM version of SCVMM 2012 SP1 :-), maybe it is fixed there?

    If it works with Failover Cluster Manager and CAU, it should work with SCVMM as well.

    The documentation for SCVMM says that the VMs are either put into saved state or live migrated; since local (non-clustered) VMs cannot be live migrated, they end up in saved state:

    http://technet.microsoft.com/en-us/library/hh882398.aspx

    Maybe you should use a standalone host for the VMs that are not supposed to be on a cluster node.

    Or, if you must have a VM on a specific host, use affinity rules.

    //niklas


    • Edited by vNiklasMVP Saturday, January 12, 2013 1:15 PM
    Saturday, January 12, 2013 1:08 PM
  • Hi Niklas!

    Thanks for the VERY quick answer! :-)

    I will try the RTM ASAP and report back if and when I get results.

    However, I am not really surprised that SCVMM behaves differently from FCM and CAU, since different dev teams are responsible for each.

    Additional standalone hosts are not an option for most SME environments (cost, complexity) and shouldn't be necessary either. That would imply having at least two (!) standalone hosts aside from the cluster (for two redundant DCs, right?), driving complexity beyond any reasonable measure. (I am hoping that SCVMM's developers have considered this.)

    I do understand that SCVMM works as designed (not a bug), which is why I would like to understand the merit of this behavior and/or learn how to change it, since I feel it is entirely unsuitable.
    And how exactly would setting affinity rules solve the problem? My virtual DCs are not clustered in order to allow them to boot up independently of the cluster service.

    Saturday, January 12, 2013 1:35 PM
  • Yes, maintenance mode behavior is inconsistent between VMM and FCM. VMM introduced this behavior in VMM 2012, so SP1 RTM should behave in the same way, I'm afraid.

    Maintenance mode is generally intended to precede actions like hardware changes or patching, so non-HA VMs need to have their state saved before the host is shut down or restarted. FCM doesn't do this automatically, but VMM does. I'd like to understand what actions you want to perform on the host, so I can better understand why you want to put it into maintenance mode but still have it as an active Hyper-V host. If you still need access to hosted VMs, it doesn't sound like maintenance mode is exactly what you need.

    To automatically repopulate a node after it has been evacuated and brought back online, I suggest enabling Dynamic Optimization on the cluster. That will rebalance the cluster when the node comes back online. If you have specific nodes that you would like VMs to return to, you can set a preferred owner in the VM properties.
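
    For example, preferred owners can also be set outside the console with the FailoverClusters PowerShell module. A minimal sketch (the group and node names below are placeholders; for VMM-created HA VMs the cluster group is typically named "SCVMM <VMName> Resources", so check the output of Get-ClusterGroup first):

        Import-Module FailoverClusters

        # List the VM resource groups to find the exact group name.
        Get-ClusterGroup

        # Prefer HOST01 for this VM; the cluster can still fail it over elsewhere.
        Set-ClusterOwnerNode -Group "SCVMM SQL01 Resources" -Owners "HOST01"

        # Verify the preferred owner list.
        Get-ClusterOwnerNode -Group "SCVMM SQL01 Resources"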

    Let me know if this helps!

    Cheers, Hilton

    Monday, January 14, 2013 6:50 PM
  • Hi Hilton!

    Thanks for the clarification.

    Very simply put: I would like SCVMM to ASK me what to do with the un-clustered VMs when I enter maintenance mode, or with all VMs for that matter. Here are the options I would like:

    1. Do nothing; let the VM's own settings determine what happens when the host shuts down/reboots.
    2. Shut down.
    3. Save.
    4. Live migrate.

    As a matter of fact, option 4 seems the most obvious to me, since in W2K12 any VM can live migrate anywhere within the domain.
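
    To illustrate what I mean by option 4, here is a rough sketch of a "shared-nothing" live migration of a non-clustered VM with the Hyper-V PowerShell module (the VM, host, and path names are just placeholders, and it assumes live migration is already enabled and authentication is configured between both hosts):

        # Move the non-clustered DC, including its storage, to another W2K12 host.
        Move-VM -Name "DC01" -DestinationHost "HOST02" `
                -IncludeStorage -DestinationStoragePath "D:\VMs\DC01"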

    Background:
    Case A: My DCs are virtualized, but non-clustered. For them, I'd prefer either option 2 or 1.
    Case B: My SQL and DHCP ARE clustered, using clustering within the VMs themselves (these also do not use host clustering). I'm not sure how FCS reacts to nodes being "saved" and then suddenly reappearing on the network; I couldn't explore that yet. For them, I'd prefer option 2 or 4.

    Come to think of it, "saving" the VMs seems to be the LEAST practical way of handling VMs, due to all the potential dependencies and caveats. Or am I alone in thinking this? As of now, I'd consider SCVMM's maintenance mode downright risky, if not dangerous.

    Oh, and BTW: is there any way to SEE whether a node is in maintenance mode or not, other than right-clicking it and checking whether I can enable or disable MM? MM does not seem to propagate up to the main GUI in any way; it must be hidden away where I can't see it.

    Tuesday, January 15, 2013 12:49 PM
  • Thanks for the feedback, Dominique.

    Yes, SCVMM does not have a way to automatically manage migrations of non-HA VMs prior to maintenance mode. Your suggestions are perfectly valid, and your feedback is in line with our own thinking about possible ways to handle this in the future.

    Right now the workarounds are to make those VMs HA (in which case cluster migration will evacuate the host before maintenance mode), or to evacuate them manually or via script.
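
    For the "via script" route, a minimal sketch with the VMM 2012 SP1 cmdlets might look like this (the server, host, and path names are placeholders, and the exact parameters should be checked against Get-Help on your build):

        Import-Module virtualmachinemanager
        Get-SCVMMServer -ComputerName "VMMSERVER01" | Out-Null

        $source = Get-SCVMHost -ComputerName "HOST01"
        $target = Get-SCVMHost -ComputerName "HOST02"

        # Move every non-HA VM off the source host before maintenance mode.
        $nonHaVms = Get-SCVirtualMachine | Where-Object {
            $_.VMHost.Name -eq $source.Name -and -not $_.IsHighlyAvailable
        }
        foreach ($vm in $nonHaVms) {
            # Destination path is a placeholder; pick a valid path on the target.
            Move-SCVirtualMachine -VM $vm -VMHost $target -Path "C:\VMs"
        }

        # Then start maintenance mode; HA VMs are live migrated within the cluster.
        Disable-SCVMHost -VMHost $source -MoveWithinCluster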

    Thanks

    Hilton

    • Marked as answer by Dominique Cote Tuesday, January 15, 2013 6:28 PM
    Tuesday, January 15, 2013 5:14 PM