We in IT love our buzzwords and the next big thing. But am I really the only person who cannot see the point of containers? I mean, those of us who were working in IT during the early noughties, at the birth of virtualization in the enterprise, will well remember containers (sorry, Solaris Zones) from Sun Microsystems. We should also remember that the questions they were supposed to answer were better answered by the then-newfangled technology called “virtualization” from a little-known upstart company called “VMware.”
I fondly remember having long conversations with my friends (Solaris programmers) about the relative merits of containerization, as it was called then, versus a Type 1 hypervisor. History proved that virtualization was the better technology at the time, but here we are, less than ten years later, and it seems that the beast has reawakened.
We are now having the same conversations, the same arguments, about why containers are better than virtual machines, namely:
- They are great for CI/CD, today’s hipster term for the development lifecycle and testing.
- They are great for fast provisioning: I can just spin up a container in seconds.
- They are great for rapid de-provisioning: just delete the container.
- They are great when you need to limit resource usage.
- They are great when you need to isolate services and applications.
- They are great for application security.
Let us investigate each of these points in turn and see which of them still hold today, and whether we can get the same benefits without changing our environment, perhaps even gaining some enhancements along the way.
Containers are great for your test and development environment
One of the major use cases that the proponents of containerization posit is that of rapid development and testing in DevOps environments. This is most likely true if you are only developing for Linux environments, but what about those who are developing for OS X, Android, or even Windows? It seems pretty limiting, even counterproductive, to have two separate development environments: one for Linux and one for the rest of the world. Why increase your operational overhead by having to manage two separate environments?
Containers are great for fast provisioning
Container adherents claim that a container can be spun up faster than a virtual machine. Now, this statement may be true when you are talking about traditional, non-VASA-enabled storage, but I am pretty sure that I can provision and power up a new instance of a machine from a template in seconds with a modern array that has VAAI and VASA, coupled with linked-clone technology. Also, I get the benefit of multiple guest OS support.
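To make that comparison concrete, here is roughly what scripted linked-clone provisioning looks like. This is a minimal sketch using pyVmomi; the vCenter address, credentials, and VM names are all illustrative, and it assumes the master VM already has a snapshot taken (which is what a linked clone chains off).

```python
# Minimal pyVmomi sketch: provision a linked clone from a master VM.
# All names here are illustrative, not from the article.
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()

def find_vm(name):
    """Walk the vCenter inventory for a VM by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.Destroy()

master = find_vm("web-master")  # a master VM with an existing snapshot

# A linked clone shares the master's base disk via that snapshot, so only
# delta blocks are ever written -- which is why it provisions in seconds.
relocate = vim.vm.RelocateSpec(diskMoveType="createNewChildDiskBacking")
spec = vim.vm.CloneSpec(
    location=relocate,
    powerOn=True,
    snapshot=master.snapshot.rootSnapshotList[0].snapshot)

WaitForTask(master.Clone(folder=master.parent, name="test-vm-01", spec=spec))
```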
Containers are great for rapid de-provisioning
Well, not much to say here. How quick is a right-click and “Remove from Inventory” or “Delete from Disk”?
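And scripted, it is barely any longer. Continuing the hypothetical pyVmomi sketch from above:

```python
# Power off and delete the clone: the scripted equivalent of
# right-click "Delete from Disk". find_vm() is from the earlier sketch.
from pyVim.task import WaitForTask
from pyVmomi import vim

vm = find_vm("test-vm-01")
if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    WaitForTask(vm.PowerOffVM_Task())  # a VM must be off before destruction
WaitForTask(vm.Destroy_Task())         # unregisters the VM and deletes its files
```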
Containers are great for limiting resource usage
I often hear that one of the major benefits of containers is that they run on bare metal, and therefore 100% of the machine resources are available to your containers, without any hypervisor overhead. Let us look at this claim more closely, because it seems self-contradictory: you want 100% of your machine resources, and then you carve them up by containerizing the application. Okay, so I use a resource pool on my vSphere cluster instead, and in return I get the benefits of vSphere High Availability (perhaps not that useful in a containerized world, as your application should be providing the resilience layer) and of vSphere Distributed Resource Scheduler (DRS). DRS is very useful: I can move my machine to a less-utilized host to gain performance. Try that one with a physical workload.
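For reference, the container side of resource limiting amounts to a couple of keyword arguments. A minimal sketch assuming the docker-py SDK; the image and the limits are illustrative. Under the hood these map to Linux cgroup settings, which is functionally the same lever a vSphere resource pool pulls with its shares, reservations, and limits.

```python
# Hypothetical docker-py sketch: cap a container's memory and CPU.
import docker

client = docker.from_env()
container = client.containers.run(
    "nginx:latest",
    detach=True,
    mem_limit="256m",       # hard memory cap, enforced by the memory cgroup
    nano_cpus=500000000,    # 0.5 of a CPU (nano_cpus is in 1e-9 CPU units)
)
```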
Containers are great for isolating services and applications
I understand the concept of LXC (LinuX Containers). I also understand that you can have a read-only (RO) base environment and a writable (RW) layer that hosts your container. But sorry: this is not anywhere near as good as VM isolation. Containers can isolate (after a fashion), but you share the kernel, binaries, and in many cases libraries, too. Application-based virtualization provides a similar solution and works for Windows-based applications: products like App-V and ThinApp fill this gap in the Wintel world. Container isolation does not appear as mature as App-V's and to me seems more akin to a layering technology. Perhaps Unidesk or Mirage could fill this gap.
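The shared kernel is easy to demonstrate. A minimal sketch assuming the docker-py SDK and a local Docker daemon; the alpine image is just an example:

```python
# Hypothetical sketch: a container reports the *host's* kernel release,
# because a container is an isolated process tree on the same kernel.
import platform
import docker

client = docker.from_env()
container_kernel = client.containers.run("alpine", "uname -r", remove=True)

print("host kernel:     ", platform.release())
print("container kernel:", container_kernel.decode().strip())
# These two lines match. A VM, by contrast, boots its own guest kernel,
# which is the root of the isolation difference.
```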
Containers are great for application security
Well, they are if they are used in conjunction with other security methods, such as mandatory access controls (SELinux or AppArmor, for example). Root is Root is Root: from a Linux perspective, it matters not whether that root is running in the base OS or in the container. Unless it is protected, I can still pull your rug. LXC is not enhancing the environment here, and neither is Docker or any other technology. Maybe at the level of an ordinary user account you have greater protection, but not when you are looking at privilege escalation via su or a chroot escape.
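One concrete illustration of “Root is Root”: absent user namespaces or a MAC policy, UID 0 inside a container is UID 0 on the host. A hypothetical docker-py sketch; the paths and image are illustrative:

```python
# Hypothetical sketch: a file created by the container's default root user
# on a bind-mounted host directory is owned by root (UID 0) on the host.
import os
import tempfile
import docker

client = docker.from_env()
host_dir = tempfile.mkdtemp()

client.containers.run(
    "alpine", "touch /mnt/owned-by-container", remove=True,
    volumes={host_dir: {"bind": "/mnt", "mode": "rw"}},
)

uid = os.stat(os.path.join(host_dir, "owned-by-container")).st_uid
print("owner UID on the host:", uid)  # 0 -- container root is host root
```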
What, if any, are the benefits of containers over and above traditional virtualization?
This is the perfect consultant's question. If you are running a private cloud, or even traditional virtualization, then I would argue “not many,” as you can get the performance and benefits of containers with your current technology sets without the overhead of retraining your admin staff on another technology. I believe the same can be said of hybrid clouds; in this case, you will be spinning up your public instances as needed. The question becomes a little more grey when you are looking at public cloud, where you are charged per instance. AWS and GCE charge per VM, and spinning up a new VM to test a code function in these environments can get expensive very quickly. Here I can see the benefits of a container-type environment. There are obviously still limitations, as it is Linux-only. Yes, I know Microsoft has committed to using Docker technology in its operating systems at some point, but that point is not now, and I have yet to be convinced of containers’ benefits over app virtualization in this particular space.