Discussion
 High-Level Overview
 Vendor-agnostic


There are different types of virtualization. This topic primarily focuses on 'machine' virtualization - a VM that runs within a container on some type of virtualization engine (Hyper-V, XenServer, ESX, Virtual Server, VirtualPC, Parallels, etc.). Another dominant type is session virtualization - this is where Remote Desktop Services (Terminal Services) fits. A third type, application virtualization, is not new but has been slow to gain hold (App-V and XenApp streamed applications are good examples).

For everything dealing with machine virtualization (e.g. Backup and Restore, Security, Availability, etc.) you usually have to choose between two approaches. The first one is: “Treat every VM as if it were just an ordinary piece of hardware”. This approach is typical for infrastructures that have just started adopting virtualization or have decided to stop at very limited virtualization penetration. It has the following pros and cons.

Pros

  1. Employs skills and software you already have (as you already used them with physical servers in your infrastructure). That
    • allows for a quicker start and
    • is usually cheaper (in terms of time and skills).
  2. Each VM's operations are as independent of other VMs and the host as possible.
    • No problems with security and delegation.

Cons

  1. TCO is higher than optimal due to significant performance overhead
    • especially when all the VMs consume the same resources and services or perform the same tasks at the same time.
  2. Some advanced scenarios may not be possible.
  3. Software costs may be high (you often need to pay for every VM individually).
  4. Support costs may be high (each VM can have its own unique issues).
  5. Operations costs may be high
    • especially if you want to work around issue #1 and manually adjust complex schedules.
  6. Provisioning and decommissioning of VMs are slower (as you need to set up all the services for every VM).
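To make cons #1 and #5 concrete: when every VM is treated as an independent machine, avoiding simultaneous resource spikes means manually staggering identical in-guest jobs (backups, antivirus scans, updates) across a maintenance window. The sketch below illustrates the kind of schedule you end up maintaining by hand; the VM names and window values are purely illustrative, not from any particular product.

```python
from datetime import datetime, timedelta

def stagger_schedule(vm_names, window_start, window_minutes):
    """Spread per-VM job start times evenly across a maintenance window
    so that identical in-guest jobs do not all hit the shared host
    resources at the same moment."""
    if not vm_names:
        return {}
    step = window_minutes / len(vm_names)
    return {
        name: window_start + timedelta(minutes=round(i * step))
        for i, name in enumerate(vm_names)
    }

# Illustrative example: four VMs sharing a two-hour backup window
# that opens at 01:00.
slots = stagger_schedule(
    ["vm-web1", "vm-web2", "vm-db1", "vm-file1"],
    datetime(2024, 1, 1, 1, 0),
    120,
)
for vm, start in slots.items():
    print(vm, start.strftime("%H:%M"))  # vm-web1 01:00, vm-web2 01:30, ...
```

The point of the sketch is the maintenance burden: every time a VM is added or removed, every offset has to be recalculated and pushed into each guest individually, which is exactly the operations cost the list above describes.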

The second approach is: “Take advantage of advanced virtualization features and offload as many jobs as you can to the hypervisor (or its management partition)”. This is typical for infrastructures with higher virtualization penetration and strong trust in their hypervisor of choice. This approach has the following pros and cons.

Pros

  1. TCO is closer to optimal as most operations are maintained and intelligently scheduled by the hypervisor (or by software running in the management partition). Load is spread more evenly, so fewer resources are consumed at any point in time.
  2. Some advanced scenarios become possible (e.g. restoring the whole VM with its customized virtual hardware settings, content and even running applications).
  3. Software costs may be lower, as you need licenses per host server (or per physical CPU) rather than per VM (or per virtual CPU).
  4. Support and operations costs may be lower, as you have a unified approach for all (or groups of) VMs.
  5. Provisioning and decommissioning of VMs are quicker, as all necessary work is done only at the host level.
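The licensing point above is simple arithmetic, but it is worth working through once. The sketch below compares the two models; the prices and counts are hypothetical placeholders, not real vendor list prices.

```python
def per_vm_cost(vm_count, price_per_vm):
    """Total license cost when every VM (or virtual CPU) needs its own license."""
    return vm_count * price_per_vm

def per_host_cost(host_count, price_per_host):
    """Total license cost when licenses are bought per host (or physical CPU)."""
    return host_count * price_per_host

# Hypothetical numbers: 3 hosts running 10 VMs each,
# a $500 per-VM license vs. a $2000 per-host license.
vms, hosts = 30, 3
print(per_vm_cost(vms, 500))       # 15000
print(per_host_cost(hosts, 2000))  # 6000
```

The gap widens as VM density per host grows, which is why the per-host model tends to favor infrastructures with higher virtualization penetration, exactly the environments the second approach targets.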

Cons

  1. Requires specialized skills, which may increase training and support costs and lengthen the deployment timeframe.
  2. Requires a significant one-time investment in specialized software (e.g. virtualization-aware software and/or higher SKUs of the hypervisor itself).
  3. VM operations are tightly bound to host-level features.
    • Host-level failure affects all VMs.
    • Host-level jobs may interfere with features like Live Migration.
  4. The delegation model is highly complex, which may raise additional security and compliance questions.
  5. Offloaded work can drive high utilization in the management OS, which negatively affects the whole environment.