If our first post focused on how “sharing is caring” when it comes to containers, this one focuses on the “trust but verify” aspect of containers.
Containers allow multiple workloads to run on a single container host OS, while the host OS works to segment and isolate those containers from one another. This is done in a few different ways:
The container host OS projects the files that comprise it into the container. The container then sees these files as if they are local to it, and it can read them freely, but as soon as it tries to write to them, things start to change. The host OS can’t let one container make a change that would affect all its other containers, so that write has to be isolated away. The container writes to its own disk for those blocks that need modification, while the rest of the files are still accessible from the parent. Think of it as a differencing VHD/VHDX/VMDK like you might use for VDI: nothing is written to the differencing disk until the child tries to change the source disk.
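The copy-on-write behavior described above can be sketched in a few lines of Python using `collections.ChainMap`, which resolves reads through a chain of layers but sends writes only to the front layer. The file names and contents here are purely hypothetical; this is a conceptual model of the layering, not how Windows implements it.

```python
from collections import ChainMap

# Hypothetical files on the container host (the shared "source disk").
host_layer = {"ntdll.dll": "v1", "app.config": "defaults"}

# Each container gets an empty writable layer (the "differencing disk")
# chained in front of the shared, read-only host layer.
container_layer = {}
container_view = ChainMap(container_layer, host_layer)

# Reads fall through to the host layer...
assert container_view["app.config"] == "defaults"

# ...but a write lands only in the container's own layer (copy-on-write),
# leaving the host layer, and therefore every other container, untouched.
container_view["app.config"] = "customized"
assert host_layer["app.config"] == "defaults"
assert container_layer["app.config"] == "customized"
```

A second container would get its own empty front layer over the same host layer, so its reads still see `"defaults"` no matter what the first container wrote.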
Process isolation is an interesting one when you first look at it. If you go into each container (via PowerShell from the parent), the container can only see its own processes and nothing else. So, if you list processes in the container, you’ll see very few: just the ones needed for the container in question. This upholds the segmentation model. However, the container host can see all of its own processes and those of the containers it runs.
This is because the kernel runs only once, no matter how many containers exist. Since the kernel is responsible for all of the actual scheduling, memory management, etc., it needs to see all of the processes. It sees EVERYTHING that is happening within the containers. This is why Hyper-V containers exist: to provide FULL isolation with nothing shared, because anywhere components are shared, there is at least a theoretical path to circumvent the isolation.
Networking is the last piece. What does it have to do with isolation? Well, the way that the container host actually isolates things is by creating a small private network on itself for each container. In the default configuration, each container gets its own private IP address (by default, 172.16.0.x), which is then NATed back to the host. The host then pushes that network traffic out to the network as if it’s its own. So, for example, Container 1 may be 172.16.0.2 running IIS. If I need port 80 to be visible on the network, a NAT rule has to be created to forward external port 80 to port 80 on the container’s IP, and then I also have to create a firewall rule on my container host to allow the traffic in. The traffic then flows as follows:
Public Network -> Container host firewall -> NAT translation to container IP.
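The NAT step in that flow boils down to a lookup table on the host: external port in, container IP and port out. Here is a minimal sketch of that translation, using the 172.16.0.2/port 80 IIS example from above; the table structure and function name are illustrative, not any real Windows API.

```python
# Hypothetical static NAT table on the container host:
# external port -> (container IP, container port).
nat_rules = {80: ("172.16.0.2", 80)}

def translate_inbound(dest_port):
    """Rewrite an inbound packet's destination per the NAT table.

    Returns the (container_ip, container_port) the host forwards to,
    or None when no rule exists, in which case nothing reaches a container.
    """
    return nat_rules.get(dest_port)

# Port 80 traffic arriving at the host is forwarded to the IIS container...
assert translate_inbound(80) == ("172.16.0.2", 80)
# ...while ports with no NAT rule never reach any container.
assert translate_inbound(443) is None
```

Note that the host firewall sits in front of this lookup, so even a mapped port stays unreachable until the matching firewall rule is also in place.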
While it’s not crucial to know at a deep level how all the technology we use works, hopefully this high-level explanation provides enough information to understand WHY you have to do, and will see, certain things while using containers in Server 2016.
Did you catch our earlier posts on Windows Server 2016?