We’ve covered how to keep the steel, copper, and plastic humming by controlling environmental conditions and designing redundant hardware, but that doesn’t get us very far if we have no actual workloads (services) running on it. In much the same way that the only secure server is the one still in the box, the only 100 percent available workload is the one that is never created in the first place. So how do we make these workloads highly available? There are a few tricks, which we will cover in the coming weeks. The first is server virtualization.
In the last five years, server virtualization has come to be seen as the silver bullet for server workloads. High availability is one area where virtualization genuinely delivers.
Some of the features supported by virtualization include the ability to:

- Move a live, running server workload from one physical server to another
- Move a running server workload from one physical set of storage to another
- Automatically bring the workload up on another piece of physical hardware in the event of a sudden hardware failure
- Run the same workload in lockstep on two pieces of hardware at the virtual machine level
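As one concrete sketch of the first two features, here is what live migration looks like on a libvirt/KVM host (the guest name `web01` and destination `host2` are hypothetical, and your hypervisor's tooling may differ):

```shell
# Live-migrate the running guest "web01" to host2 while it keeps serving
# traffic. This form assumes both hosts see the same shared storage.
virsh migrate --live web01 qemu+ssh://host2/system

# When storage is not shared, copy the guest's disks to the destination
# as part of the same live migration.
virsh migrate --live --copy-storage-all web01 qemu+ssh://host2/system
```

The point is less the specific commands than the capability: the operating system inside the guest never knows it changed hardware.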
All of these features are great for keeping the virtual hardware up and running, but what happens when the operating system inside that virtual hardware melts down from a system crash (also lovingly known as the Blue Screen of Death) or needs a patch? The hardware will stay up, but the workload will go down for repair or maintenance; this is where other technologies, like clustering and mirroring, help maintain highly available servers.
Check back next week for our blog on clustering servers as part of your high availability strategy and contact us to learn more about how we can help you achieve optimal efficiency of your hardware!