Containers are a concept that many people struggle to understand. They add a layer of abstraction and virtualization that we haven't directly dealt with much before. There is a fairly simple way of understanding them, though, and it starts with some definitions we haven't explored in a while. To begin with: what is a virtual machine (VM)?
Everyone knows a definition, but how often do you stop and think about what a VM actually is? The term is so common that it's simply taken as an understood concept. Also commonly called virtual hardware or a virtual server, a VM is the result of a piece of software taking physical hardware and creating a software entity that appears to be dedicated hardware. To an operating system (OS) installed on it, it appears to have full control of the hardware and can treat it like normal hardware. This is the process of virtualization: a virtual reality is created for the OS, such that it appears to have its own dedicated hardware to do with as it wants. In reality, some clever software underneath is abstracting the hardware and giving the OS only as much as it is allocated and needs, when it needs it. This is where the terms virtual machine, virtual hardware, and virtual server come from.
Containers as Virtual OS
Containers introduce a concept that is new to most of us: the virtual OS. On the face of it, most people assume that a virtual OS is simply an OS installed inside a virtual machine, but that's not really true. A virtual OS does for the operating system what a VM does for hardware. The operating system itself is virtualized, such that each application appears to have a dedicated OS that it can use as it wants. In reality, some clever software underneath is taking the OS and giving the application only as much OS as it needs, when it needs it (sound familiar?). The operating system itself is now virtualized and faked to the application.
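You can see this illusion directly with Docker on Windows. The image name and tag below are assumptions, so substitute whatever base image your environment uses; the point is that each container reports its own computer name, even though both share one underlying Windows kernel:

```shell
# Run the same command in two separate containers.
# Image name/tag are assumptions - adjust for your environment.
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 hostname
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 hostname

# Each run prints a different, generated hostname: the OS identity
# each application sees is virtualized per container.
```

To the process inside, that hostname (along with its own file system view, registry, and services) looks like a dedicated operating system.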
The Path from Physical to Virtual to Virtual OS
This continual layering of abstraction gives us more and more operational efficiency. Originally we had loads of physical servers, each of which may not have been that busy, but aside from installing a whole lot of applications on the same instance of Windows, there wasn't a way to be more efficient. Everyone quickly learned that running multiple roles on a single server led to stability issues, compatibility issues, and long-term upgrade nightmares.
To combat this, we started carving the hardware into multiple virtual machines, which let us keep a single application per instance of Windows and still utilize the hardware more fully by running many segmented instances of Windows on it. This still isn't maximally efficient, however, because if you need to run a lot of IIS, for example, you can only have one instance of IIS running per OS. IIS may consume only a hundred megs of RAM and minimal CPU, but you have to devote a whole running OS instance to it, which consumes 30 to 40 times the resources of the application itself. You can host multiple application pools, with multiple websites, inside that IIS, but they all run on the same IIS service and are therefore somewhat interdependent.
This is where the virtual OS comes in. By virtualizing the OS to the application, you can now have many different IIS instances running on that one operating system. Each IIS container gets its own IIS service that runs independently of the others, and independently of the OS itself. That means if one crashes or needs restarting, it doesn't impact the others. Rather than devoting the overhead of a whole Windows instance just to get another isolated IIS, we can devote a couple of megs of RAM to the overhead of the container, and not duplicate the entire Windows OS.
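As a sketch of what that looks like in practice (the image name is an assumption and the ports and container names are arbitrary examples), two fully isolated IIS instances can run side by side on one Windows host:

```shell
# Start two independent IIS containers on the same host OS,
# each mapped to a different host port.
# Image name, container names, and ports are illustrative assumptions.
docker run -d --name web1 -p 8081:80 mcr.microsoft.com/windows/servercore/iis
docker run -d --name web2 -p 8082:80 mcr.microsoft.com/windows/servercore/iis

# Restarting one has no effect on the other - each container
# runs its own IIS service rather than sharing a single one.
docker restart web1
```

Each container adds only a small amount of overhead on top of the shared OS, rather than the gigabytes of RAM and disk a full Windows VM would require.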
In the image below, the same hardware is used, but as we work through the virtualization paradigms we get more and more instances of IIS, and greater efficiency: