This is an article gleaned from an exhaustive TechNet forum post. 

This covers the basics of virtual CPUs and CPU sharing / time slicing, and should aid in understanding how hypervisors present CPUs to virtual machines.

It has been formatted as a Q&A to divide the topics and group related items.

Host CPU vs assigned CPU

Q:  Someone told me: "If you assign 4 CPUs to the VM, the VM will use all of the CPUs of the host computer (Windows 2008 R2 Enterprise server).  It is better to assign two CPUs to the VM so that it will not tie up the host's CPUs."  I am confused; shouldn't all VMs run on separate CPUs from the host?  I do not see a performance hit when I assign 4 CPUs to the VM.

A:  Actually, this is all well documented in the Hyper-V documentation:

A physical CPU core is controlled by the hypervisor and is divided up into virtual CPU cores.  It is these virtual CPU cores that are presented to (and used by) the virtual machines.
A virtual CPU is not a one-to-one assignment - it represents time.  It is a representation of time on the physical CPU resource stack.
Also, the number of virtual cores that can be assigned to a VM is limited.  Today the maximum is 4 (and that depends on the OS in the VM).
So you will see a performance hit if you are running multiple processes that consume all available processor time.

Also check:

Logical Processors assignment
Hypervisor virtualization basics a visual representation
Set processor affinity Hyper-V

The statement is incorrect in the way he / she has explained it to you.

Hyper-V is a full hypervisor (in spite of the fact that it looks like Windows at the console) and therefore it operates just like hypervisors have for the past 10 years.


And one last comment: avoid processor affinity; it causes more problems than it solves (in the majority of cases).


Q:  So, 4 vCPUs are not equal to 4 CPUs of the host.  If I assign 4 vCPUs to the VM, it certainly will not affect the host?

So, can we say one vCPU is equal to one logical processor of the physical host (e.g., one quad-core processor = 8 logical processors)?

A:  When dealing with hypervisors it is easier to think in cores and virtual processors than it is to think in cores, logical processors, and virtual processors.

In reality, what really matters is cores and virtual processors.  Each core can very safely support 8 virtual processors.  Logical processors are abstractions of time on the physical processors and really muddy the thinking, as they apply to a physical machine one way and to a hypervisor in a different way.
Can you go beyond this number?  Oh, yes. Very easily - and as long as you don't have CPU intensive virtual machines you can do it safely as well.
The thing to plan for when moving to virtual machines is to plan hypervisor capacity in a scale-out type of way.  If one hypervisor gets too crowded you add another, and / or pick a VM to move off to the other hypervisor.  Thus the overloaded hypervisor re-balances and your new one picks up some of the work.
So, you need to plan in the flexibility to move VMs (either with Live Migration or by powering them off) between hypervisors to balance the work.
Managing hypervisors and VM workloads is still an art, not a science - as there are so many variables that affect the workload, the VM OS, the hypervisor, and the hardware.
In the end you will be far more productive with two hypervisors than you would be with one huge one.

8 CPU vs 4 vCPU

Q:  If one core can support 8 virtual processors, and I only assign 4 vCPUs to the VM, doesn't that contradict the consultant's statement that I will use up all of the processors?

If I have one core, the CPU shows 8 logical processors in the Performance tab of Task Manager.  If I assign 4 vCPUs to one VM, the CPU shows 4 logical processors in the Performance tab of Task Manager.  So, if a vCPU is not equal to a logical processor, can you explain why there is a difference in Task Manager?

So, one vCPU is equal to one core?

From this link

A virtual processor does not necessarily have to correspond to a physical processor or to a physical CPU core. Microsoft recommends that you maintain a one-to-one ratio of virtual processors to physical CPU cores. On top of that, I recommend that you reserve at least one CPU core for the host operating system. This means that if you stick to the recommendation, a server with four CPU cores could host up to three virtual machines, with one virtual processor each (because one core is being used by the host operating system). You could also host two virtual machines: one with two virtual processors and one with one virtual processor.

A:  1 vCPU does not equal 1 core - there is no one-to-one relationship (at all) with any hypervisor (not with Hyper-V, nor XenServer, nor ESX).

Let’s do the Hyper-V math...  The capacity planning guideline is 8 vCPUs per CPU core.
One CPU (dual core) = 2 cores x 8 = 16 virtual CPUs.
16 vCPUs / 4 vCPUs per VM = 4 VMs
16 vCPUs / 2 vCPUs per VM = 8 VMs
16 vCPUs / 1 vCPU per VM = 16 VMs
(These are 'suggested' for planning, not enforced.)
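The planning math above can be sketched in a few lines of Python (a simple illustration using the 8-vCPUs-per-core guideline; the function names are made up for this sketch, and the numbers are suggestions, not enforced limits):

```python
# Capacity-planning sketch: 8 vCPUs per physical core (a guideline, not enforced).
VCPUS_PER_CORE = 8

def planned_vcpus(cores: int) -> int:
    """Total vCPUs to plan for, given the number of physical cores."""
    return cores * VCPUS_PER_CORE

def max_vms(cores: int, vcpus_per_vm: int) -> int:
    """How many VMs of a given size fit within the planning guideline."""
    return planned_vcpus(cores) // vcpus_per_vm

# One dual-core CPU = 2 cores -> 16 planned vCPUs
print(planned_vcpus(2))   # 16
print(max_vms(2, 4))      # 4 VMs with 4 vCPUs each
print(max_vms(2, 2))      # 8 VMs with 2 vCPUs each
print(max_vms(2, 1))      # 16 VMs with 1 vCPU each
```
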
Task Manager should be showing you the number of 'cores' in the CPU usage graph.  Hyper-threading (falsely) shows double in Task Manager, but folks don't talk of hyper-threading much as it has been pretty much replaced by multi-core.
This is a case where you need to know your hardware.
Thanks for including the Petri link.
I see the MSFT recommendation being over-generalized here.
The Petri article states: "Microsoft recommends that you maintain a one-to-one ratio of virtual processors to physical CPU cores."
The MSFT statement (in the article Petri references) is: "When running a CPU intensive application, the best configuration is a 1-to-1 ratio of virtual processors in the guest operating system(s) to the logical processors available to the host operating system."  These two statements are not equivalent.
First of all, we have the term "CPU intensive" - I have seen very few CPU-intensive applications in the enterprise.  The most common things I have seen that fit this description are Terminal Servers, Exchange, and SQL.
Second: the Petri article refers to "cores" and MSFT refers to "logical processors" - and we just went through an exhaustive process of explaining the differences.
Now, let me get really technical on you...
If you have a VM with 1 vCPU and your host has more than one core, that single VM will (technically) share in the processing resources of all of the cores (watch my video about this).  It gets round-robin'd between all of the cores.  The parent partition is the only element that is parked on core 0.
Virtual processors are all about time: the amount of time spent on the CPU and the amount of time not spent on the CPU.  This is why a VM does not run as well or as fast as the same system installed on bare metal (on bare metal there is no time on / time off the CPU - well, technically there is, but it is at the application level, not the OS level).
Again, I discuss this and visually represent this in my video.
In the case of hypervisors, a logical processor does equal a vCPU.  However, a logical processor does not equal a core, nor does Task Manager show you the number of logical processors that you have.  I have stopped using the term "logical processor" when I talk of hypervisors.  It is a very technical term that means something to software developers and something totally different to systems administrators.
Ben (the Hyper-V PM) describes it this way:
This is a single execution pipeline on the physical processor.  In the "good old days" someone could tell you that they had a two-processor system - and you knew exactly what they had.  Today if someone told you that they had a two-processor system, you would not know how many cores each processor has, or whether hyper-threading is present.  A two-processor computer with hyper-threading would actually have 4 execution pipelines - or 4 logical processors.  A two-processor computer with quad-core processors would in turn have 8 logical processors.

Q:  What is the difference between a logical processor and a physical processor?  And what is the relationship with the number of cores?
Q:  What is a quad-core server?  On what parameters does it depend, i.e., does it depend on the number of processors (virtual or physical) or something else?

Above, BrianEh mentioned: "One CPU (dual core) = 2 cores x 8 = 16 virtual CPUs."

What does this really mean?  Why are we multiplying by 8?

Also, he said one dual-core CPU = 2 cores.  Let’s say I have a quad-core processor; would the virtual CPU count then be 4 * 8 = 32 virtual CPUs?  If that is right, what is the count of logical CPUs?

When creating VMs, do we use logical CPUs or virtual CPUs?


A:  First of all - logical processor = virtual processor (vCPU); they are one and the same.

logical processor = virtual processor (vCPU)

In the context of this discussion I will only use vCPU / virtual processor.
Cores are hardware.  Cores are the number of processors within your physical CPU package (multi-core processors).  This has happened within the past five years due to advances in processor manufacturing.  Before this we had hyper-threading, and before that we had only single-core processors (we generically called them CPUs).
Sockets are hardware.  Sockets are the number of processor sockets that your motherboard has.  And, sockets are only potential.  What we really care about is the number of processors that are installed, as a socket could be empty.
A motherboard with 2 sockets can support 2 processors.  A "dual-core" processor = 2 CPUs.  A quad-core processor = 4 CPUs.
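Counting installed CPUs from the hardware description above can be sketched as follows (a simple illustration, not tied to any Hyper-V API; the hyper-threading flag reflects the execution-pipeline description quoted earlier):

```python
# Simple illustration (not a Hyper-V API): counting installed CPUs from
# sockets and cores, with hyper-threading doubling the execution pipelines.
def physical_cpus(sockets: int, cores_per_socket: int) -> int:
    """A dual-core processor = 2 CPUs; a quad-core processor = 4 CPUs."""
    return sockets * cores_per_socket

def logical_processors(sockets: int, cores_per_socket: int,
                       hyperthreading: bool = False) -> int:
    """Each core is one execution pipeline; hyper-threading doubles it."""
    return physical_cpus(sockets, cores_per_socket) * (2 if hyperthreading else 1)

print(physical_cpus(2, 4))                            # 2 quad-core sockets = 8 CPUs
print(logical_processors(2, 1, hyperthreading=True))  # 2 single-core + HT = 4 LPs
```
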
The number 8 used here is the recommended calculation for capacity planning (circa 2009) - 8 virtual processors per core.
A common 1U server might be:  2 - quad core processors.  2 * 4 = 8 CPUs
These CPUs are subdivided into 8 virtual processors (vCPUs).  8*8 = 64 vCPUs
This number of 64 can be used for planning the VM capacity of your hypervisor in relation to processing power.  It all depends on how your VMs are configured, as each vCPU assigned to a VM decrements from this total of 64.
Now, the fun hitch to all of this - these are guidelines.  Could you create enough VMs where your total vCPU count > 64?  Yes, you can.  And they can all run at the same time too.
The odds are good, however, that you will run out of RAM or hit diminishing returns with disk IO before your host runs out of processing capacity for virtual machines.
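A hypothetical planning helper (purely illustrative; the class and its names are made up for this sketch) can make the "each VM decrements from the total" idea concrete:

```python
# Hypothetical planning helper: track how configured VMs draw down the
# 64-vCPU planning budget of a 2-socket, quad-core host (8 cores x 8).
class VcpuBudget:
    def __init__(self, sockets: int, cores_per_socket: int, ratio: int = 8):
        self.capacity = sockets * cores_per_socket * ratio
        self.allocated = 0

    def add_vm(self, vcpus: int) -> int:
        """Record a VM; returns the remaining budget.  It may go negative -
        the ratio is a planning guideline, not an enforced limit."""
        self.allocated += vcpus
        return self.capacity - self.allocated

budget = VcpuBudget(sockets=2, cores_per_socket=4)
print(budget.capacity)   # 64
print(budget.add_vm(4))  # 60 remaining
print(budget.add_vm(2))  # 58 remaining
```
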

Physical to virtual calculation

Q:  So if I have 4 physical CPUs, then the number of virtual processors I should have is 4 * 8 = 32, am I correct?

Can I allocate these 32 vCPUs to one virtual machine?  And what will happen when I want to add some other virtual machines (assuming I have enough RAM capacity)?

A: There are limits to the number of vCPUs that a single VM can have (these are based on the OS installed in the virtual machine).
The fewer VMs that a hypervisor has, the more CPU time the VMs get (and thus the better user experience).  The more VMs that a hypervisor has, the less CPU time the VMs get.  This is because CPU time is a finite resource and all VMs must share (nicely) in using that finite resource.
Your statement of "Logical Processors: 8" is the same number that I am using for my calculation.  I don't know where you are getting your number, as this is generally not a number given by a hardware vendor.
If I work your configuration backward...  sockets = 2, cores per processor (socket) = 4, therefore physical CPUs = 8.  Hyper-V then divides each of these 8 physical CPUs into 8 logical processors (each also referred to as a vCPU) for a total of 64 virtual processors.

Virtual cores

Q:  From what I am reading it seems like it should use all 8 cores, although from what I am reading in other parts of this forum it doesn't.  I understand that the virtual cores are basically an allocation of time across the entire 8 cores?

I have a software developer telling me that his software will only use 4 cores if presented with 4 virtual CPUs.  I said yes, it will seem that way to the OS, but underneath it will actually be using all 8 cores as the load is spread across them all.

Am I correct in that assumption?

A:  Yes.  All 8 physical cores are used; however, the VM can only be allocated a (current) maximum of 4 vCPUs.

A physical core gets divided into 8 vCPUs (these are technically logical processors and therefore represent processing time on the physical stack with a maximum of 8 concurrent threads executing on a physical processor at any one moment in time).
As threads execute (vCPUs are used) they are cycled around the physical CPUs.
It is possible for a VM to get 'stuck' on a single physical CPU.  Applications like CPUBurn never stop processing and maximize the threads within the VM.  The result is that the execution never has a breakpoint and therefore the hypervisor never gets a chance to cycle the work to the next processor. This is an example of a 'bad' application that does not virtualize well.
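The time-slicing behavior described here can be shown with a toy simulation (this is not how Hyper-V's scheduler is actually implemented; it only illustrates the idea that a workload which never yields gives the hypervisor no breakpoint at which to move it):

```python
# Toy illustration (not Hyper-V's actual scheduler): vCPUs whose workload
# yields get round-robin'd across physical cores; one that never stops
# processing (e.g. CPUBurn) stays parked on a single core.
from collections import defaultdict

def simulate(ticks: int, cores: int, vcpus: dict) -> dict:
    """vcpus maps a vCPU name -> True if its workload ever yields.
    Returns, per vCPU, the set of physical cores it ran on."""
    placement = {name: i % cores for i, name in enumerate(vcpus)}
    ran_on = defaultdict(set)
    for _ in range(ticks):
        for name, core in placement.items():
            ran_on[name].add(core)
            if vcpus[name]:  # the workload yielded, so the hypervisor
                placement[name] = (core + 1) % cores  # can cycle it onward
    return dict(ran_on)

result = simulate(ticks=8, cores=4,
                  vcpus={"well-behaved": True, "cpuburn": False})
print(result["well-behaved"])  # visits all 4 cores
print(result["cpuburn"])       # stuck on the core it started on
```
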
