10 Common Virtualization Missteps

August 29, 2017

In all the years we’ve consulted in the virtualization space, we’ve seen environments of all shapes and sizes. Doing what we do, we have the unique privilege of seeing a high number of virtual environments in distress (and we’d be remiss not to admit we’ve learned the hard way on a few ourselves), so we’ve learned to look out for a lot of different virtualization missteps.

Here are a few we frequently see in no particular order and why they cause issues:

1. Not building enough nodes in vSAN or Storage Spaces Direct

Best practice for both VMware vSAN and Hyper-V Storage Spaces Direct is to have at least four nodes, because that’s what it takes to be truly fault tolerant. While you can feasibly run two or three, you lose true high availability.

Say you’re doing maintenance on one node: while that host is in maintenance mode, you have a single point of failure. If another host fails during that window, you’re at risk of losing data, and you’re at risk of losing quorum. With four nodes you also get more usable storage capacity, since with fewer nodes both VMware and Hyper-V have to reserve additional space to maintain data redundancy.
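The capacity and maintenance-headroom points above can be sketched with some back-of-the-envelope arithmetic. This is an illustrative sketch only, not a vendor sizing tool; the node counts, per-node capacity, and the simplified "copies must fit on remaining nodes" rule are our own assumptions for a two-way mirror (FTT=1) layout.

```python
# Back-of-the-envelope sketch (not a vendor sizing tool): usable capacity
# and maintenance headroom for a mirrored (FTT=1 / two-way mirror) cluster.
# Node counts and per-node TB below are hypothetical example values.

def usable_tb(nodes: int, tb_per_node: float, copies: int = 2) -> float:
    """Raw capacity divided by the number of data copies kept."""
    return nodes * tb_per_node / copies

def survives_failure_during_maintenance(nodes: int, copies: int = 2) -> bool:
    """With one node in maintenance AND one node failed, enough nodes
    must remain to hold every copy of the data."""
    return nodes - 2 >= copies

for n in (2, 3, 4):
    print(n, usable_tb(n, 10.0), survives_failure_during_maintenance(n))
```

With three 10 TB nodes you get 15 TB usable and no headroom for a failure during maintenance; at four nodes the same math comes out survivable.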

2. Oversizing guest VM hardware resources

For example, if you follow your vendor’s sizing guidelines to a T when your workloads really only require half that, you could be doing yourself a gross disservice when it comes to licensing extra cores, wasting host resources, and so on. Check out our blog for more information on how to size servers, and our blog on CPU utilization.
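A quick way to sanity-check a vendor spec against reality is to scale it by observed peak utilization plus some headroom. The numbers and the 25% headroom factor below are made-up examples for illustration, not vendor guidance:

```python
# Hypothetical right-sizing sketch: compare a vendor's recommended vCPU
# count against observed peak utilization. All values are example numbers.

import math

def suggested_vcpus(vendor_vcpus: int, peak_util_pct: float,
                    headroom: float = 1.25) -> int:
    """vCPUs actually needed at peak, plus ~25% headroom, never below 1."""
    needed = vendor_vcpus * (peak_util_pct / 100.0) * headroom
    return max(1, math.ceil(needed))

# Vendor says 16 vCPUs, but the workload peaks at 40% of that:
print(suggested_vcpus(16, 40))  # -> 8
```

Halving the vCPU count in a case like this can mean real licensing savings without starving the workload.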

3. Removing virtual hardware without cleaning it out of the guest OS

It’s common to forget to wipe that information out of your guest OS after you remove virtual hardware, or when you migrate from physical to virtual. If you don’t, you can run into conflicts with settings and configurations. Here’s a great blog on removing orphaned hardware devices; it may be a bit older, but it’s still relevant.

4. Being unfamiliar with any part of your environment: network, storage, etc.

When you virtualize, all of the elements of your environment interact closely, so brush up on networking, shared storage, and so on. Because everything converges in one place, you have to be able to troubleshoot each layer. Virtual switching, and how it integrates with the physical network, is a common area that gets overlooked.

5. Not configuring MPIO

Multipath I/O (MPIO) ensures high availability for your storage paths. Without it, you can’t actually take advantage of the redundant paths that exist in the environment, and a single cable, switch port, or controller failure can take down your storage connectivity.
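One simple audit that catches this class of problem is checking how many distinct paths each LUN actually has. The LUN names and path identifiers below are hypothetical, and this is a standalone sketch rather than a call into any storage vendor’s API:

```python
# Illustrative check (hypothetical data, not a storage-vendor API): flag
# LUNs reachable over only a single path, i.e. where multipath redundancy
# isn't actually protecting anything.

from collections import defaultdict

def single_path_luns(paths: list) -> list:
    """paths is a list of (lun_id, path_id) pairs; return LUNs with < 2 paths."""
    by_lun = defaultdict(set)
    for lun, path in paths:
        by_lun[lun].add(path)
    return sorted(lun for lun, p in by_lun.items() if len(p) < 2)

paths = [("LUN01", "vmhba1:C0:T0:L1"), ("LUN01", "vmhba2:C0:T0:L1"),
         ("LUN02", "vmhba1:C0:T0:L2")]
print(single_path_luns(paths))  # -> ['LUN02']
```

Anything that comes back from a check like this is a LUN that will disappear the moment its one path does.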

6. Ignoring virtual assets when doing inventory

When you’re keeping track of your assets, they aren’t just physical devices. Your virtual servers are assets just as much, and it’s easy to lose track of them as your environment grows. You may end up banging your head against the wall trying to figure out how to log into a virtual server without readily accessible documentation of the login information (not that we’ve experienced that first hand or anything).

On top of that, imagine losing your entire virtual environment: without the virtual assets inventoried, you might have to rebuild it all from scratch, from memory.
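The fix is to treat VMs as first-class inventory records alongside hosts. A minimal sketch of that idea, with made-up host and VM names and fields, might look like:

```python
# Sketch of an inventory that records VMs as first-class assets alongside
# physical hosts. All names and fields are hypothetical examples.

physical = {"esx01": {"type": "host", "location": "rack 4"}}
virtual  = {"sql01": {"type": "vm", "runs_on": "esx01"},
            "web01": {"type": "vm", "runs_on": "esx01"}}

inventory = {**physical, **virtual}   # one inventory, not two

def assets_lost_with(host: str, inv: dict) -> list:
    """Which inventoried assets vanish if this host is lost?"""
    return sorted(name for name, a in inv.items() if a.get("runs_on") == host)

print(assets_lost_with("esx01", inventory))  # -> ['sql01', 'web01']
```

Being able to answer "what lived on that host?" from your inventory is exactly what makes a rebuild possible without guesswork.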

7. Trying to load balance iSCSI traffic with vSwitching

This really just doesn’t work, and you might actually take a performance hit trying. Load balancing and failover for iSCSI traffic are MPIO’s job, and it’s designed for exactly that. Contact us if you have more questions about this.

8. Not keeping your virtualization tools up to date

Without up-to-date virtualization tools (VMware Tools, Hyper-V Integration Services), you’re missing out on driver updates and patches, which can drastically impact performance or even cause outright issues. We’ve seen problems like memory ballooning caused by outdated tools.

9. Using jumbo frames

It is highly unlikely you’ll see enough benefit to offset all the time and effort of configuring these. They’re also very easy to misconfigure, since every device in the path has to agree on the MTU, and a mismatch can seriously decrease performance instead of increasing it. You can read more about jumbo frames on Brent’s blog.
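The "easy to misconfigure" part comes down to one rule: the effective end-to-end MTU is the smallest MTU of any hop in the path. The path below is a hypothetical example:

```python
# Why misconfigured jumbo frames hurt: the end-to-end MTU is capped by the
# smallest MTU of any hop. Hop values below are a hypothetical example.

def effective_mtu(path_mtus: list) -> int:
    """End-to-end MTU is the minimum MTU across every hop in the path."""
    return min(path_mtus)

# One switch port left at the default 1500 silently caps the whole path:
path = [9000, 9000, 1500, 9000]   # vSwitch, host NIC, switch port, SAN
print(effective_mtu(path))        # -> 1500; jumbo frames never happen
```

Worse than never getting 9000-byte frames, a hop that drops oversized frames instead of fragmenting them can cause hard-to-diagnose storage errors rather than just reduced throughput.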

10. Not offloading A/V to the hypervisor

You can suffer a ridiculous amount of guest overhead if you run antivirus (A/V) in each virtual machine instead of offloading it to the hypervisor. When A/V scans or definition updates kick off in many guests on the same host at once, you can suffer severe I/O storms or massive CPU utilization. We’ve seen large SANs brought to their knees for hours at a time by this very thing.

Yes, there are licensing implications to configuring it this way, but per-guest A/V can be a significant source of your virtual environment’s performance struggles.
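The I/O storm effect is simple multiplication: every guest’s scan hits the same shared storage at the same time. The guest count and per-scan IOPS below are made-up example numbers, not measurements:

```python
# Rough arithmetic (made-up example numbers) for why per-guest A/V scans
# gang up on shared storage: every concurrent scan hits the same SAN.

def scan_iops(guests_scanning: int, iops_per_scan: int) -> int:
    """Aggregate read IOPS hitting shared storage during a scan window."""
    return guests_scanning * iops_per_scan

one_guest   = scan_iops(1, 800)    # a single agent scanning one guest
all_at_once = scan_iops(50, 800)   # 50 guests scheduled for the same hour

print(one_guest, all_at_once)      # -> 800 40000
```

A load that is trivial per guest becomes tens of thousands of IOPS in aggregate, which is how a scheduled 2 a.m. scan flattens a SAN; hypervisor-level (agentless) scanning avoids the multiplication entirely.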

We wrote a follow-up with 12 More Common Virtualization Missteps that you might want to read!

If you have questions about any of the above or about VMware or Hyper-V in general, send us an email or give us a call at 502-240-0404!