The latest and greatest thing in the data center is apparently containers. For those of us with long enough teeth to remember the heady days of the early millennium, they look and smell a lot like Solaris Zones.

Containers in their current incarnations are garnering a great deal of attention, especially in the DevOps world, where continuous deployment is the latest word in deployment strategies.

It is said that nothing is new in the world, and with containers, this statement could not be truer. I think, therefore, that an overview of the evolution of the container may be useful.

In this post I am not intending to discuss the benefits or downsides of containers, although I will touch upon them.

Containers, as a concept, are built on the foundations of the basic *nix process model, which provides the first level of separation. Now, it must be remembered that *nix processes are not truly independent environments. They do, however, provide basic isolation and a consistent interface. For example, a process has its own identity and security attributes, address space, copies of registers, and independent references to common system resources. These features standardize the communication paths between processes and help limit the damage that a wayward process or application can do to the system as a whole.
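
As a minimal illustration of that isolation, consider what happens after fork(): parent and child share the same code, but each has its own address space and identity, so a write in the child never shows up in the parent. The sketch below is illustrative only.

```c
/* Minimal sketch: two processes share code but not an address space.
 * The child's write to `counter` is invisible to the parent, and each
 * process reports its own identity via getpid(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int counter = 0;
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                  /* child */
        counter = 42;                /* only the child's copy changes */
        printf("child  pid=%d counter=%d\n", getpid(), counter);
        return EXIT_SUCCESS;
    }

    wait(NULL);                      /* parent */
    printf("parent pid=%d counter=%d\n", getpid(), counter);  /* still 0 */
    return EXIT_SUCCESS;
}
```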

Also built in are some basic resource management capabilities. These include priority-based scheduling (via a process’s nice value) and the ulimit/setrlimit interface, which is used to cap resource consumption, such as CPU time and the amount of memory that can be locked, for a process and its descendants.
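
As a rough sketch of the latter, the setrlimit(2) system call (the mechanism behind the shell’s ulimit built-in) can cap CPU time and locked memory for a process and everything it subsequently spawns. The limit values below are arbitrary examples, not recommendations.

```c
/* Rough sketch of setrlimit(2), the interface behind the shell's ulimit
 * built-in. Limits set here are inherited by all descendants of this
 * process; the values are arbitrary examples. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    struct rlimit cpu = { .rlim_cur = 10, .rlim_max = 10 };            /* 10 seconds of CPU time */
    struct rlimit mlk = { .rlim_cur = 1 << 20, .rlim_max = 1 << 20 };  /* 1 MiB of locked memory */

    if (setrlimit(RLIMIT_CPU, &cpu) != 0)
        perror("setrlimit(RLIMIT_CPU)");
    if (setrlimit(RLIMIT_MEMLOCK, &mlk) != 0)
        perror("setrlimit(RLIMIT_MEMLOCK)");

    /* Anything exec'd from here on inherits both limits. */
    execlp("sh", "sh", "-c", "ulimit -t; ulimit -l", (char *)NULL);
    perror("execlp");
    return 1;
}
```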

The next major move forward was the development of Cgroups (control groups). These came out of development work by two Google engineers (Paul Menage and Rohit Seth) and were merged into the Linux kernel in version 2.6.24. Cgroups extended resource management by adding or enhancing the following abilities (a rough sketch of how a cgroup is driven from user space follows the list):

  • Resource limitation: Groups can be set to not exceed a configured memory limit, which also includes the file system cache.
  • Prioritization: Some groups may get a larger share of CPU utilization or disk I/O throughput.
  • Accounting: Measures how many resources a group uses, which could be used for billing purposes.
  • Control: Freezes groups of processes, and checkpoints and restarts them.
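
The kernel exposes all of this through a virtual filesystem, so a group can be created and configured with ordinary file operations. The sketch below assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and needs root; cgroup v2 uses a unified hierarchy with different file names.

```c
/* Rough sketch of driving a cgroup from user space: create a group,
 * cap its memory, then place the current process inside it. Assumes a
 * cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and
 * root privileges. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return; }
    fputs(value, f);
    fclose(f);
}

int main(void)
{
    char pid[32];

    mkdir("/sys/fs/cgroup/memory/demo", 0755);                      /* new group */
    write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes",
               "268435456");                                        /* 256 MiB cap */

    snprintf(pid, sizeof pid, "%d", getpid());
    write_file("/sys/fs/cgroup/memory/demo/cgroup.procs", pid);     /* join the group */

    /* From here on, this process and its children are accounted
     * against, and limited by, the "demo" group. */
    return 0;
}
```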

To find one of the first examples of groups of resources being isolated from one another, we need to travel back to 1999, when the FreeBSD jail(2) system call extended chroot. The secret sauce was that it blocked off the normal routes for escaping chroot confinement. This modification led to the original rise of containers with two companies: SWsoft’s Virtuozzo (now owned by Odin) and the better-known Sun Solaris version, Solaris Zones (now owned by Oracle). It could be argued that Sun was the first to use the term “containers”: even though its technical documentation called them “zones,” its marketing team used the catchier term “containers” in sales and marketing documentation. IBM also had a version in AIX (its proprietary UNIX implementation), and it was arguably more advanced than today’s iterations, as a running container could be moved between AIX nodes (vMotion for a container).

Despite some budding interest during the early noughties, containers never managed to make an impact. VMware was also pushing toward data center dominance at the time, and the problem with containers was that they answered some questions but not all. VMware answered many more of them and more capably addressed the needs of the enterprise.

Containers approached the issue from a *nix-focused viewpoint and ignored the 80% or more of installed Windows devices out there. This marginalized the concept to a very small subset of use cases. VMware won the battle convincingly because it was a technical match for the enterprise requirements of the time. Enterprises are by nature risk averse, and a virtual machine was an easier construct to conceptualise than an application/process-focused solution. The early noughties was the age of the machine.

That said, SWsoft (Parallels) had significant success in the high-density hosting space with its flavour of containers (Virtual Private Servers). Currently, over 10,000 service provider customers use this solution. But this is likely due to hosting Linux- and Apache-based websites rather than other applications, as well as to the fact that most hosting providers are highly cost sensitive. They tend to operate at low margins, and the container concept can enable a very high density relative to a hypervisor-based deployment.

Now I am confused. What exactly is a container? Is it a partition or a workload group?

In fact, it is both and neither.

Okay! How can a thing be something and then not something at the same time? This is not quantum physics, you know!

This is because a container, just like a partition, fools the application or process running in it into thinking it is running on a separate and independent OS image. Yet containers, just like the workload groups from which they evolved and which they have subsequently extended, have only one copy of an operating system running on the physical or virtual host server.

So, to answer the question of whether a container is a lightweight partition or a reinforced workload group: it all boils down to interpretation. In actuality, containers possess features of both constructs. It may be better to think of them as Frankenstein constructs: “enhanced workload resource partitions,” if you will. However, that is a bit of a mouthful, hence the catchier term “containers.”

Now, the term “container” resonates because it captures meaning. A container, in common parlance, contains an item, and an IT container contains a process or application. “Container” resonates better as a term than the current Windows-based construct “Application Virtualization,” which is often likened to a container; note I said “likened to” as opposed to “compared to,” as the Windows construct does not have any resource management capabilities built into it.

So, what can containers actually do?

Containers “virtualize” an operating system. Well, that is not quite correct: the applications running in each container believe they have full, unfettered, and unshared access to the OS. This is analogous to, but not the same as, what VMs do when they virtualize at the hardware level. In the case of containers, it is the OS that does the virtualizing and maintains the illusion of sole access. Hence, a container is more analogous to an Application Virtualization construct. I know the purists will now wave their pitchforks and call for the burning of the heretic, but the analogy almost fits. Application Virtualization on Windows simply tends not to have any resource management built in.
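
On Linux, the main mechanism behind that illusion is the namespace. A minimal sketch: giving a process its own UTS namespace lets it set a “container” hostname without the rest of the system ever seeing the change (Linux-specific, and it needs root or CAP_SYS_ADMIN).

```c
/* Minimal sketch of the kernel maintaining the illusion of sole access:
 * a new UTS namespace gives this process a private hostname while the
 * rest of the system keeps the original one. Linux-specific; requires
 * root or CAP_SYS_ADMIN. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char name[64];

    if (unshare(CLONE_NEWUTS) != 0) {          /* private copy of hostname state */
        perror("unshare");
        return 1;
    }

    sethostname("container-demo", strlen("container-demo"));
    gethostname(name, sizeof name);
    printf("hostname inside the namespace: %s\n", name);

    /* Outside this process the hostname is unchanged: the OS, not a
     * hypervisor, is doing the virtualizing. */
    return 0;
}
```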

Now, because of these features, containers can have a very low overhead: they run on top of a single copy of the operating system, and depending on the process or application they are running, they can consume very few system resources. Consequently, they require far fewer resources than workload management approaches that need a full OS copy for each isolated instance. But more on this later, as VMware does not completely agree with this tenet.

A container solution normally has a lower management overhead (more on this later as well), as there is a single OS that needs to be kept current with security patches and bug fixes. In an environment running containers, when patches are applied and the system has been restarted, all containers automatically and immediately benefit from the updates. With other forms of partitioning, such as hypervisor-based virtualization, each virtual machine’s operating system needs to be patched and updated separately, just as it would if it were still running on independent physical servers. The ability to update multiple working environments in a single pass has been seen as a critical benefit in hosting environments, but it has also often been seen as a negative in much more heterogeneous enterprise environments. That said, there is an enormous caveat: it assumes that the containers running on the underlying OS do not have any library version dependencies. This is one of the major sticking points in moving from development to production. There is a vast difference between “Well, the code runs here on my laptop” and what is considered a production environment.

What containers don’t do very well is provide much, if any, additional fault isolation for problems arising outside the process or group of processes being contained. If the operating system or underlying hardware goes, so go its containers: that is, every container running on the system. However, it’s worth noting that, over the past decade, an enormous amount of work has gone into hardening the Linux kernel and its various subsystems. Nevertheless, if a patch is applied to the underlying operating system by the operations team to fix a security fault, there is a possibility that all or some of the containers that are running on that machine may fail to work anymore due to a library mismatch. Currently, containers cannot move between container hosts, which is a shame, as this problem appeared to have been solved by IBM in the early noughties on AIX.

NOW, THIS IS IMPORTANT

Seriously, folks. If you are moving to a container-based environment, your current waterfall strategy will not cut the mustard. Your development teams must be involved in operational decisions, so that their code can be modified to run on the new build before it is released. This is one of the major tenets of continuous delivery. Your operations team will need to push its changes upstream to development, so that any new code deployed will not fail in production because different libraries or kernel versions are in use in production or staging. Development needs to keep operations informed of all dependency requirements and any changes to those dependencies, so that operations can change staging and production to match, thereby limiting potential dependency issues. In short, you will need to break down those silos, not just between teams at the same level, but between functions like QA, dev, operations, and, yes, even security.

Following are some comments about containers that can generally be considered to be true:

  • All the containers on a single physical or virtual machine run on a single OS kernel. (Note I say “OS kernel.” To date, there is no Windows version that is production ready. Microsoft does have a tech preview in its Server 2016 Beta program).
  • The degree to which the contents of any given container can be customized is dependent upon the implementation or deployment.
  • Most patches that apply across the containers will be associated with an OS instance. This opens a potential for container failure due to library mismatch dependencies.
  • Resource allocation management between containers is fast and has a low overhead, simply because it’s done by a single kernel managing its own process threads.
  • The creation (and destruction) of containers is fast and has a lower overhead than booting a virtual machine. It is more akin to starting an application (see the sketch after this list).
  • Currently running containers cannot be moved from one running system to another. Therefore, build resilience into the process or run on virtual machines to take advantage of vMotion, etc.
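
A rough sketch of why creation is so cheap: on Linux, “creating a container” is ultimately a clone() call that starts an ordinary process in fresh namespaces, and destroying it is just reaping that process. The example below is illustrative only and requires root (or CAP_SYS_ADMIN).

```c
/* Rough sketch: container creation as a single clone() call. The child
 * starts in new PID and UTS namespaces and simply exec's a shell;
 * tearing the "container" down is just reaping the process. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_main(void *arg)
{
    (void)arg;
    printf("inside: pid=%d (PID 1 of the new namespace)\n", getpid());
    execlp("sh", "sh", (char *)NULL);   /* the "container" is just this process tree */
    perror("execlp");
    return 1;
}

int main(void)
{
    pid_t pid = clone(child_main, child_stack + sizeof child_stack,
                      CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return EXIT_FAILURE;
    }
    waitpid(pid, NULL, 0);              /* reaping the process tears everything down */
    return EXIT_SUCCESS;
}
```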

Why has this construct that was beaten into the ground in the early millennium suddenly gained momentum again?

At its simplest, it is because of PaaS (Platform as a Service). This version of a cloud more closely resembles the constructs of the high-density hosting providers that bought into SWsoft than those of the enterprise, which align more with IaaS (Infrastructure as a Service).

High-density providers like Rumahweb and Hen’s Teeth provide Apache instances for their customers to run their websites on, with many of those websites hosted on the same physical server. In a PaaS environment, customers are provided with an environment in which to install their applications.

With PaaS, protections are built into the application layer. The workloads lean more towards scale-out, stateless, and loosely coupled deployment strategies than the scale-up, stateful, and tightly integrated strategies of the traditional enterprise. These new cloud-native applications are designed to run in a hybrid cloud environment and by and large use abstracted languages such as Java, Ruby, or Python rather than compiled languages such as C# or C++.

These design principles have some positive implications. For example, you generally do not need to protect the state of individual instances by using clustering or live migration techniques. Further, your operational management is simplified, as you do not have a highly disparate set of underlying OS images to manage. Ideally, you only need to manage a single documented build that all developers, testers, and operational staff utilize.

PaaS as a construct explicitly abstracts away the underlying infrastructure: you do not care what is there; all that matters is that your application stack will run. This will, if properly provisioned, enable the rapid creation and deployment of applications, with auto-scaling across local data centers and remotely at your hybrid cloud partner. This vision of IT is what has led to the resurgence of the container as a valid construct for the delivery of workloads, because of the high densities that can be achieved and the ability to rapidly allocate resources upwards. More importantly, it is due to the ability to rein that back in again as and when the business’s desires or needs drive it.

Therefore, it is no coincidence that the “Solaris Zones” construct has risen from the dead. Polly has indeed woken up. Containers are most definitely a match for this cloud-defined world and for PaaS deployments in particular.

Now, this is not to say that they’re going to replace virtual machines in your data center: they are not. In fact, the vast majority of container workloads will be running on a virtual instance, either on-site or remotely.

Docker instances will benefit from an underlying virtual machine host to aid in uptime. VMware has announced Project Photon, a lightweight Linux OS that provides support for the most popular container formats, including Docker, Rocket, and Garden, while Project Bonneville hooks into the hypervisor to provide micro-VMs, each hosting a single container instance.

I would therefore argue instead that containers are a great complement that need not, and most probably should not, try to replicate the capabilities of a virtual machine, as VMs were designed with completely different use cases in mind. Containers and VMs will happily cohabit in the data center for many years to come, like salt and pepper. Neither will oust the other; the path will simply be physical to virtual to container/application.

2 replies on “Containers: Innovation or Evolution? Will They Rule the World?”

  1. Tom, thank you for the great article. I believe live migration and stateful containers are very important topics. I completely agree that stateless containers can make life easier for the ops guys. However, statelessness does not allow a lot of legacy applications to be migrated to a container-based cloud, to a PaaS. It forces the dev guys to redesign their applications, and as a result it slows down adoption by enterprises.

    Actually, Virtuozzo (VZ) implemented live migration many years ago. Moreover, the VZ team has open-sourced this solution as CRIU (Checkpoint/Restore In Userspace), and it is now in the upstream kernel: criu.org/Upstream_kernel_commits. We have used this feature heavily in Jelastic since 2011. It helps us solve a lot of issues related to stateful, old-style clustered, and legacy apps.

    Jelastic is a container-based PaaS. It is different compared to other solutions on the market because it does not force developers to change the design of their apps. They can use both approaches, meaning they can move their apps to the Jelastic PaaS without code changes and later improve them step by step, migrating to a stateless design if they need to.

    Once again, thank you for raising this interesting topic.
