Will containers change the shoe size (footprint) of the physical servers in the data center? Recently, I was talking with peers about what containers can bring to the environment. “What changes are needed in the environment,” we asked, “to achieve the greatest success when offering containers as an option to customers?” To truly understand the change in thinking about the physical server’s footprint, we first need a basic understanding of the differences between virtualization in general and container virtualization.

Let’s start with the general concept behind virtualization. Virtualization is a method for encapsulating a separate and independent operating system, called a guest operating system, and running it on top of a host layer commonly called the hypervisor. The physical hardware’s resources are shared by all of the guest operating systems running on top of the hypervisor, which allows that hardware to be utilized much more fully.

In hypervisor-based virtualization, everything is done at the hardware level. The guest operating system runs on virtual hardware created by the hypervisor, so the only thing the hypervisor can change for a guest is the hardware resources it presents, nothing inside the guest operating system itself. The hypervisor is also called a virtual machine monitor (VMM), because it sits between the guest operating system and the real physical hardware and controls how hardware resources are allocated to each guest.

What makes container virtualization different from hypervisor virtualization is that it is done at the operating system level rather than at the hardware level. Each container sits on top of the base system’s kernel and shares much of the base operating system, which makes a container far smaller and less resource-intensive than a virtualized guest operating system. Because of this, a single host can run many containers, compared to the limited number of complete guest operating systems it can support under hypervisor virtualization.
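If you want to see that shared kernel for yourself, here is a minimal sketch. It assumes a Linux host with Docker installed and the public alpine image available, which is just my choice of tooling for the demonstration rather than anything the discussion above requires.

```python
import platform
import subprocess

# Kernel release as reported by the host.
host_kernel = platform.release()

# Kernel release as reported from inside a container. Assumes Docker is
# installed and can run the public "alpine" image (my assumption, purely
# for illustration).
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# On a Linux host, both lines print the same version string, because the
# container is simply an isolated group of processes on the host's kernel.
```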

With container virtualization, you still run workloads in an independent and isolated environment. Because the container shares its kernel with the base system, you can see the processes running inside the container from the base operating system; from inside the container itself, your view is limited to the container’s own processes. In case you are still a little confused, let me sum up the difference this way: the basic distinction between hypervisor virtualization and container virtualization is where the virtualization layer sits and how the host operating system’s resources are shared.
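To make that distinction concrete, here is a minimal sketch of the kernel feature that produces those two different views: Linux PID namespaces. It has to run as root on a Linux host, and it is only an illustration of the mechanism containers build on, not a recipe for building one.

```python
import ctypes
import os
import sys

CLONE_NEWPID = 0x20000000  # flag value from <linux/sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Ask the kernel to place any children we create from now on into a new
# PID namespace. Requires root (CAP_SYS_ADMIN) on a Linux host.
if libc.unshare(CLONE_NEWPID) != 0:
    sys.exit("unshare failed: " + os.strerror(ctypes.get_errno()))

pid = os.fork()
if pid == 0:
    # Inside the new namespace, the child believes it is PID 1...
    print("the isolated process's view of itself: PID", os.getpid())
    os._exit(0)
else:
    # ...while the base operating system still sees it as an ordinary,
    # fully visible process with a normal PID.
    print("the base system's view of that process: PID", pid)
    os.waitpid(pid, 0)
```

The same shared kernel is keeping both sets of books, which is exactly why the base system can watch a container’s processes while the container sees only its own.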

Because container instances are smaller and less resource-intensive than hypervisor virtual machines, you can run quite a few more of them on a single host. One of the biggest concerns I have heard when discussing this with my peers is “How can we protect ourselves from ourselves?” What I mean is that some admins or other end users might keep piling on containers until they completely max out the capacity of the host system. What happens when that maxed-out host fails? Hundreds of container instances could be affected.

This raises the following question: “Should we use smaller host systems with the ability to run fewer container instances in order to protect ourselves from taking down the entire enterprise when a single host fails?” This issue has been raised in the past, but it seems to take on more significance in light of what containers bring to the table.

So, I present this question to you. When considering container virtualization, is it better to shrink the footprint of the physical host to protect ourselves from doing what we can do instead of what we should do?