In a typical week at Mirazon, we are often called in to evaluate a customer’s virtual environment. The conversation usually opens with something like “things just FEEL slow sometimes…” and we dive into the plumbing to figure out what “slow” means. When we handle the initial design and deployment, we draw on best practices, field notes, and design conversations to ensure the best possible implementation and performance. When we’re called in as the “plumber,” however, we have to start at the beginning and gather a lot of information to learn the environment.

Recently, we found a great blog article on Hyper-V processor sizing from Hyper-V Program Manager Ben Armstrong. It includes a very useful PowerShell one-liner and gives some reasonable guidance on how to properly size the ratio of virtual processors to logical processors on the physical host. The script is as follows:

write-host (@(gwmi -ns root\virtualization MSVM_Processor).count / (@(gwmi Win32_Processor) | measure -p NumberOfLogicalProcessors -sum).Sum) "virtual processor(s) per logical processor" -f yellow

And for our 3x Hyper-V 2012 Hosts, the output looks like this:


According to this article, the CPU target is 4:1 virtual processors to logical processors. A virtual processor is what is assigned to the virtual machine itself; a logical processor is a core on a physical processor/socket in the host. You can see we’re at 0.25, 0.96 and 1.08 respectively across our three hosts. This information is helpful to gather so we can properly evaluate, monitor, and explain it to the customer. A ratio higher than 4:1 can be a quick pointer to why “stuff is slow.”

The article, other Hyper-V best practices, and our own field notes all concur on a ratio of 4:1. And of course, for sizing you should not count hyperthreads as logical processors. All hyperthreading does is widen the control set for each core, adding maybe 20-30 percent of performance, and only when the application is written to take advantage of it. A useful way to explain this in less technical terms is a drive-thru. Our local Chick-fil-A has two ordering lanes but still only one window: they can take more orders more quickly, but there is still only one “processor” to do the work.
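To make the core-versus-thread distinction concrete, here is a short sketch. The helper function and all of the counts are made up for illustration; they are not taken from the article or from a real host. It shows how counting hyperthreads instead of cores cuts the apparent ratio in half:

```powershell
# Illustration only: sample numbers, not measured on a real host.
# Hypothetical helper that computes the VP ratio from raw counts.
function Get-VpRatio {
    param([int]$VirtualProcessors, [int]$Denominator)
    [math]::Round($VirtualProcessors / $Denominator, 2)
}

$vps     = 32   # virtual processors assigned across all VMs
$cores   = 8    # physical cores in the host
$threads = 16   # logical processors with hyperthreading on (2 per core)

Get-VpRatio -VirtualProcessors $vps -Denominator $cores    # counts cores:   4
Get-VpRatio -VirtualProcessors $vps -Denominator $threads  # counts threads: 2
```

The same host looks half as loaded when the denominator includes hyperthreads, which is exactly the skew discussed above.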

So, to do a quick sanity check of the math, we did a count.


But the LCPUs category clearly includes hyperthreads. That’s not what we want, nor does it accurately reflect the ratio we’re after. We want CORES, not all THREADS (which would skew the expected performance metrics). We had to dig into the WMI query and pull a different property, NumberOfCores instead of NumberOfLogicalProcessors, and the new PowerShell script looks like this:

write-host (@(gwmi -ns root\virtualization MSVM_Processor).count / (@(gwmi Win32_Processor) | measure -p NumberOfCores -sum).Sum) "virtual processor(s) per core" -f yellow

And for our 3x Hyper-V 2012 Hosts, the output looks like this:


That’s better. This is a common situation which could have led to erroneous expectations. It appeared we were WAY UNDER the expected metric of 4:1, but in reality we’re approaching it on hosts VM01 and VM03. A better WMI query leads to better results, which leads to more realistic expectations and improved design and performance.
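With a corrected ratio in hand, a quick triage check against the 4:1 guidance might look like the sketch below. The function name, the default threshold, and the sample ratios are ours for illustration; only the 4:1 target comes from the article:

```powershell
# Hedged sketch: flag a host whose VP-to-core ratio exceeds the 4:1 guidance.
function Test-VpRatio {
    param([double]$Ratio, [double]$Target = 4.0)
    if ($Ratio -gt $Target) { "OVERSUBSCRIBED (ratio $Ratio exceeds ${Target}:1)" }
    else                    { "OK (ratio $Ratio is within ${Target}:1)" }
}

# Sample ratios similar to the ones measured above:
Test-VpRatio 0.96   # well under the target
Test-VpRatio 5.5    # would warrant a closer look
```

A check like this makes it easy to scan a fleet of hosts and spot the one worth investigating when a customer says things “feel slow.”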

Got a handle on Hyper-V and PowerShell? We’re looking for engineers like you.