In the ever-changing world of IT, today's legacy was yesterday's future: namely, hypervisors. Hypervisors are now considered legacy, even though they remain seriously underutilized thanks to fear, uncertainty, and doubt about pushing them to their limits. The new technology is containers. But where are the operational tools to support containers? Where are the procedures? Where are the developers who understand distributed systems? We are moving toward containers at lightning speed without answers to these questions and many more. To move to containers today, we need a strategy.
A strategy is required to move from virtual or physical machines to containers, but it does not need to be as far-reaching as one might expect. You may not need to completely retool your IT department if you phase in containers where appropriate. One of those now-legacy hypervisor companies has actually given us a strategy, one that accounts for these enterprise questions. It boils down to the following:

  • Put our legacy applications on legacy hypervisors.
  • Integrate knowledge of containers into our management tools.
  • Use a set of tools or a platform to deploy containers using existing service management practices.
  • As the tools catch up to containers, deploy containers even more densely than we do now.

Or the strategy could be as simple as the diagram below. If you ignore the VMware-centric systems, you could put together any set of tools and hypervisors. However, VMware already has a set of tools in beta to help with this problem: VMware Integrated Containers, Photon OS, and VMware Photon Platform (ESXi + Photon OS + Photon Controller).

[Figure: Photon Platform Migration]
The key to the above strategy is to keep one container to one virtual machine. Go past that limit, and the tools to manage the environment sanely do not yet exist: we are back to guessing games and an inability to track down the real cause of an issue. By using what we already have, we can migrate to a containerized world fairly quickly, with minimal impact on existing systems.
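To make the rule concrete, here is a minimal placement sketch in Python. The inventory model and names such as `VirtualMachine` and `place_container` are illustrative assumptions, not any real VMware or Docker API; the point is only that the 1:1 rule reduces container placement to a question existing VM tooling already answers.

```python
# Minimal sketch of 1:1 container-to-VM placement.
# VirtualMachine and place_container are hypothetical names for
# illustration only, not a real VMware or Docker API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class VirtualMachine:
    name: str
    container: Optional[str] = None  # at most one container per VM


def place_container(vms: list[VirtualMachine], image: str) -> VirtualMachine:
    """Place a container on the first VM that is still empty.

    Enforces the one-container-to-one-VM rule: a VM that already
    hosts a container is never reused, so per-VM tooling
    (monitoring, backup, security) maps cleanly to each container.
    """
    for vm in vms:
        if vm.container is None:
            vm.container = image
            return vm
    raise RuntimeError("No free VM: provision a new lightweight VM first")


if __name__ == "__main__":
    pool = [VirtualMachine(f"photon-vm-{i}") for i in range(3)]
    vm = place_container(pool, "registry.example.com/app:1.0")
    print(f"{vm.container} -> {vm.name}")
```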
But aren’t containers supposed to run dense? Of course they are, but so are hypervisors. Most hypervisors run at 20 to 30% utilization on modern hardware; they can handle far more. Running lightweight VMs with just enough operating system to host a single container therefore adds little overhead, and it preserves the use of existing tools for troubleshooting and management, as the rough arithmetic below suggests.
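A back-of-envelope estimate makes the point. The 25% figure is the midpoint of the 20 to 30% range cited above; the host size, per-VM just-enough-OS footprint, and per-container working set are assumptions chosen only to show the shape of the calculation, not measured values.

```python
# Back-of-envelope density headroom for 1:1 container-VMs.
# All constants below are illustrative assumptions.

HOST_RAM_GB = 256          # assumed host memory
CURRENT_UTIL = 0.25        # midpoint of the 20-30% utilization cited above
TARGET_UTIL = 0.80         # leave headroom for failover
VM_OVERHEAD_GB = 0.5       # assumed just-enough-OS footprint per VM
CONTAINER_RAM_GB = 1.5     # assumed working set per container

free_gb = HOST_RAM_GB * (TARGET_UTIL - CURRENT_UTIL)
per_unit = VM_OVERHEAD_GB + CONTAINER_RAM_GB
print(f"Additional 1:1 container-VMs per host: {int(free_gb // per_unit)}")
# -> roughly 70 more single-container VMs on these assumptions:
# the lightweight-VM wrapper costs far less than the idle headroom.
```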
Existing tools offer the following for containers:

  • Multi-tenancy, which is very important when moving containers into and out of clouds. Containers themselves have no concept of multi-tenancy today; they require some external source to provide it. That becomes extremely difficult when more than one container runs on a given VM or physical system, unless those systems are dedicated to a single tenant with external controls around that tenant. Current virtual machine–based clouds already provide multi-tenancy.
  • Metrics are important for finding the root cause of any issue. A 1:1 mapping of container to VM allows existing performance management and other tools to gather metrics from the environment, interpret them, and act on them. One such action is to spin containers up or down based on need, as quickly as possible (see the sketch after this list). Photon OS and other container-focused operating systems boot and start their containers in seconds, only marginally slower than starting a container natively.
  • Planned downtime is also important, to ensure that underlying systems, firmware, and the like get upgraded. Current live migration and storage migration tools let enterprises maintain their hardware and infrastructure layers with minimal downtime. Meanwhile, the distributed nature of containerized applications provides business continuity should those same systems fail without warning. The question then becomes, “Are there enough distributed resources available to handle the workload?” The metrics discussion above helps with these failures as well.
  • Security is becoming much more of an issue, and container security lags behind. Companies like CloudPassage, Twistlock, and Docker are working on it, but virtual machine security is already well understood. Even though those VM security capabilities are underutilized, it is possible to secure the network, the virtual machines, and every part of a hypervisor-based stack efficiently and effectively.
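Here is the scaling sketch promised in the metrics item above. The thresholds and the shape of the metric feed are assumptions; in practice, the per-VM CPU numbers would come from an existing performance management tool that already understands virtual machines.

```python
# Sketch of a metrics-driven scaling decision for 1:1 container-VMs.
# Thresholds and the metric source are assumptions for illustration.

from statistics import mean


def scaling_decision(cpu_percent_per_vm: list[float],
                     scale_up_at: float = 75.0,
                     scale_down_at: float = 25.0) -> int:
    """Return the change in VM count: +1, -1, or 0.

    Because each VM hosts exactly one container, scaling the
    application is the same operation as cloning or retiring a VM,
    which existing tooling already automates.
    """
    load = mean(cpu_percent_per_vm)
    if load > scale_up_at:
        return +1   # clone a template VM, start one more container
    if load < scale_down_at and len(cpu_percent_per_vm) > 1:
        return -1   # drain and power off one VM
    return 0


print(scaling_decision([82.0, 78.5, 90.1]))  # -> +1, fleet is hot
print(scaling_decision([10.2, 8.7]))         # -> -1, fleet is idle
```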

Moving to containers does not mean you should drop your existing tool suite. The same was true when we moved to hypervisors: we just needed to adjust expectations, adjust our sources of usable data, and use what we had to its fullest. Containers may let us reach higher densities, but we still need to manage them, herd our cattle around, and keep the lights on.
For this, we can use our existing tools, but only if we maintain a one-container-to-one-VM rule. That is the rule VMware wants to focus on, and it is a good one for getting us from where we are today to where we want to be tomorrow. VMware’s strategy is fairly solid and fits the enterprise: use what we have until the toolchain changes. We are in the midst of that change, and using our existing tools until the new ones are ready lets us keep moving forward. Our new mantra:

Use what you have as you move toward the future; plan to change as tools become available.

Perhaps as we move, that future will appear faster than expected. I expect VMware Integrated Containers and VMware Photon Platform to be a large part of this evolution.