“Sharing is caring!”
At least, that’s what we were always told growing up. Windows Server 2016 containers are (despite the name) a sharing feature. First, let’s start with the general premise. Containers allow you to create isolated applications that are portable and self-contained, all the while sharing a root OS. There are two ways of doing this: host-based containers and Hyper-V containers.
Host-based containers share a core operating system (disks, running kernel, etc.) and then run smaller, isolated instances of an application on top of that host. If something happens to the host, it affects all the containers running on it and can break them as well. Containers are made for scale-out redundancy, not instance redundancy: they can protect against a single instance of an application running amok, but they can’t protect against the container host OS melting down.
For example, say I want to deploy a new IIS site hosting my buggy main line-of-business app, which crashes all the time. I can spin up one big Windows server with a lot of resources, but every time that website crashes I have to go manually restart IIS, which affects every other IIS site on that server. I could instead build two big Windows servers, which lets one pick up the load when the other fails, but restarting IIS still disrupts the other websites on that server and forces a failover to its partner.
That’s where containers come in. A container host is installed and prepped once, and isolated containers are then created from that host to run specific apps. These containers run independently but share the host OS. I could install IIS in a container without installing it in the container host OS at all.
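As a rough sketch of that one-time prep step, this is the Microsoft-documented PowerShell sequence for turning a Windows Server 2016 machine into a Docker container host (it needs internet access, an elevated session, and a reboot):

```shell
# Run in an elevated PowerShell session on the prospective container host.
# Install Docker from the Microsoft-maintained provider on the PowerShell Gallery;
# this also enables the Windows Containers feature.
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

# A reboot finishes enabling the feature.
Restart-Computer -Force
```

After the reboot, the `docker` service is running and the host is ready to pull images and create containers.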
With containers, I could build those two big Windows servers, create multiple IIS containers, and isolate my buggy main app from all the other websites I need to run, without building out several full servers. Depending on the OS choice I made, that could mean significant resource and storage savings across a fleet of servers that are all doing the same thing. Additionally, I could cap the resources available to my buggy app’s container so that it can’t starve the rest of the system when it misbehaves.
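With Docker on Windows, that resource cap is just flags on `docker run`. A minimal sketch, assuming a prepped container host and the era-appropriate Microsoft IIS base image (the container name `buggy-lob-app` and the port mapping are hypothetical):

```shell
# Run the buggy site in its own container, capped at 2 GB of RAM and
# one CPU so a crash loop can't starve the host or the other sites.
docker run -d --name buggy-lob-app --memory 2g --cpus 1 -p 8080:80 microsoft/iis

# Restarting this one container leaves every other IIS container untouched.
docker restart buggy-lob-app
```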
Further, once the container is established, I can move it from one Windows container host to another. If I later want a different server to run a container, I simply move the container, reattach it, and it’s good to go. The container can see only its own processes, but the container host can see the processes of all of its containers, which shows that there ARE shared resources. This is where the second type of container comes in: Hyper-V containers.
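One way to sketch that move with Docker’s standard tooling: commit the container’s current state to an image, export it to a tarball, and import it on the new host (all names here are hypothetical):

```shell
# On the old host: snapshot the container as an image and export it.
docker commit buggy-lob-app buggy-lob-app:moved
docker save -o buggy-lob-app.tar buggy-lob-app:moved

# Copy buggy-lob-app.tar to the new container host, then:
docker load -i buggy-lob-app.tar
docker run -d --name buggy-lob-app -p 8080:80 buggy-lob-app:moved
```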
With Hyper-V containers, not only are the file system and application containerized and kept separate, there is also a dedicated, streamlined OS for each container. This allows full isolation from other containers, which, obviously, sounds like a VM… because that is what it is. However, it’s custom-built to run container applications, and those same applications could also run in the host-based configuration. A container can move from full Hyper-V isolation to host-based isolation and back as needed, simply by stopping it and restarting it in the other mode.
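On Windows, Docker exposes that choice as the `--isolation` flag on `docker run`; the same image runs either way (sketched here with the era-appropriate Microsoft IIS base image):

```shell
# Host-based (process) isolation: shares the container host's kernel.
docker run -d --isolation=process microsoft/iis

# Hyper-V isolation: same image, but with its own utility VM and kernel.
docker run -d --isolation=hyperv microsoft/iis
```

Switching modes is just stopping the container and running it again with the other flag, which is what makes the “move it back and forth as needed” workflow possible.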
In the next post, I’ll focus on how containers, while sharing resources, remain segmented and isolated from each other.