No, this is not an article about changing jobs or anything like that. This is an article about the changing directions we have been seeing within the community and companies surrounding cloud and virtualization: a change that signals a new round of innovation and a fundamental shift in thinking. Before, we thought of cloud + virtualization as the bee's knees. We now realize that cloud + virtualization is just the starting point: within the confines of the cloud, virtualization can safely be ignored.

The last transition point occurred as we got legacy applications to use hardware more effectively by moving those applications, including business-critical applications, off traditional hardware and into virtual machines (à la Figure 1). This is now old hat; it has become a thriving business model. However, those community members who cut their teeth on virtualizing business-critical applications (à la Figure 2) have been moving on. Even those who came later, but whose careers skyrocketed in virtualization, are moving on. Where are they moving to? Not so far from virtualization: they are staying within the bounds of the cloud and business-critical applications.
However, it's a different flavor of business-critical application now: one designed for the cloud. We are no longer pushing our legacy applications into virtual machines (Figure 2); we are actively developing the next level of abstraction (Figure 3). Virtualization has become just another layer of the stack underneath where we work, and it no longer matters to the applications. We are on a trajectory toward a computing universe in which everything is run by software and the underlying hardware is not an issue: virtualization is simply part of the expected hardware.
Docker is one such movement: it containerizes applications so they run on top of an operating system, and that operating system can now be placed within any cloud. Red Hat has announced Atomic Host, and Scott Lowe, Cody Bunch, and others have been playing around with CoreOS for several years now. These frontrunners are looking for the best ways to deploy applications as a set of microservices on minimalistic operating systems that run multiple containers easily.
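To make the microservices-on-a-minimal-OS idea concrete, here is a minimal sketch using the Docker SDK for Python. It assumes a host (CoreOS, Atomic Host, or anything else exposing a Docker daemon) and an image name chosen purely for illustration.

```python
# Minimal sketch: run a containerized service on whatever minimal OS
# hosts the Docker daemon. Assumes the Docker SDK for Python ("docker"
# package) is installed and a local daemon is reachable.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Image, name, and port choices are illustrative, not from the article.
service = client.containers.run(
    "nginx:alpine",              # small image standing in for a microservice
    name="web-frontend",
    detach=True,                 # run in the background
    ports={"80/tcp": 8080},      # expose the service on the host
    restart_policy={"Name": "always"},
)

print(service.id, service.status)
```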
Combining microservices with microsegmentation within the networking layer, à la NSX, is a powerful recipe for the future of cloud, virtualization, and automation. The more we move up the stack (to end up at Figure 3), the more there is to abstract and understand below the current working level, at least until those lower levels include the abstractions that simplify and automate enough for us to ignore what sits underneath them.
The hypervisor (Figure 2) abstracted the compute hardware away from the virtual machines. This freed us from worrying about drivers for specific networking, storage, and other devices. The hardware was abstracted so that only a single set of drivers, agents, and other connecting software was required. This allowed tools like Intigua to step in and automate the maintenance of agents not normally found within the operating system, such as those required for security, data protection, and other advanced functionality. With no drivers to worry about, automation became much simpler.
Now we are at another crossroads. We have moved up to the container, which allows us to ignore the operating system completely. Eventually, we will not need to know whether the underlying levels are running Windows, Linux, or something else: all we need to run our application will be within the container.
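As an illustration of "everything the application needs lives in the container," here is a hedged sketch that builds a self-contained image from an in-memory Dockerfile using the Docker SDK for Python. The base image, packages, and command are assumptions for the example, not anything prescribed above.

```python
# Sketch: bundle the runtime and dependencies into an image so the host
# OS no longer matters. Dockerfile content is illustrative only.
import io
import docker

dockerfile = b"""
FROM python:3.11-slim
RUN pip install --no-cache-dir flask
CMD ["python", "-c", "print('service would start here')"]
"""

client = docker.from_env()

# Build from an in-memory Dockerfile; nothing from the host is copied in.
image, build_logs = client.images.build(
    fileobj=io.BytesIO(dockerfile),
    tag="selfcontained-demo:0.1",
    rm=True,                      # clean up intermediate containers
)
print(image.tags)
```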

This gives us an unprecedented ability to employ even more automation. Because the underlying levels are hidden by abstraction, we can automate container deployments onto any operating system: everything the application needs is now within the container. There are exceptions. Some applications, such as those driving scientific equipment, need direct access to hardware. These need more thought and more administration, because now we have to pull a driver into the guest OS, and the device has to be presented from the hardware, through the hypervisor, to the VM object, and then to the guest OS. But even this is being abstracted. How? By using USB and other devices over the network. These devices, when added to the mix, require an agent within the guest OS to translate the IP stream back into the device in question.
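As a sketch of what that deployment automation can look like once the guest OS no longer matters, the following loop pushes the same container onto several hosts by talking to each host's Docker daemon over TCP. The hostnames, ports, and image are assumptions for illustration.

```python
# Sketch: deploy the same container to several hosts, regardless of the
# underlying OS, by addressing each host's Docker daemon directly.
# Hostnames, ports, and image are hypothetical.
import docker

HOSTS = [
    "tcp://node-a.example.com:2376",
    "tcp://node-b.example.com:2376",
]

for url in HOSTS:
    # In a real deployment the daemon endpoint would be protected with
    # TLS; omitted here to keep the sketch short.
    client = docker.DockerClient(base_url=url)
    client.images.pull("nginx", tag="alpine")
    client.containers.run(
        "nginx:alpine",
        name="web-frontend",
        detach=True,
        ports={"80/tcp": 8080},
    )
    print(f"deployed to {url}")
```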
As we have progressed down this path, we have also pulled networking and security along with us. We have been able to further abstract networking at each level. We are approaching the point where we won't care what gear we use, as long as it transmits bits over cable or airwaves in the most efficient manner. Since Docker bought SocketPlane, whose GitHub project brings Open vSwitch to Docker, the network stack can now be abstracted at the container level.
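As a rough illustration of container-level network abstraction, here is a hedged sketch that creates an overlay network with the Docker SDK for Python and attaches a container to it. The network and container names are made up for the example, and a multi-host overlay would additionally require Swarm mode or an external key-value store.

```python
# Sketch: give containers their own abstracted network, independent of
# the physical gear underneath. Names are illustrative only.
import docker

client = docker.from_env()

# An attachable overlay network can span hosts when the daemons are
# clustered (e.g., Swarm mode); here we only show the API shape.
net = client.networks.create(
    "app-overlay",
    driver="overlay",
    attachable=True,
)

client.containers.run(
    "nginx:alpine",
    name="web-on-overlay",
    detach=True,
    network="app-overlay",   # the container sees this network, not the wire
)
print(net.name, net.id)
```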
This is going to require security companies to move with the times as well. Illumio and CloudPassage are poised to do just that, as is VMware with NSX microsegmentation. This really depends, however, on the existence of a plugin for NSX that implements microsegmentation at the container boundary rather than at the VM boundary. At the moment, VMware has not revealed such a plugin for NSX.
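To show what "the container boundary, not the VM boundary" means in practice, here is a purely illustrative sketch of the policy granularity involved. It is not an NSX or vendor API, just a Python structure comparing rules keyed by VM against rules keyed by individual containers.

```python
# Illustrative only: the difference between segmenting at the VM
# boundary and at the container boundary. Not a real vendor API.

# VM-boundary rule: every container inside vm-12 shares one policy.
vm_policy = {
    "vm-12": {"allow_inbound": [("any", 443)]},
}

# Container-boundary rules: each microservice gets its own policy,
# even when the containers share a VM.
container_policy = {
    "web-frontend": {"allow_inbound": [("any", 443)]},
    "orders-api":   {"allow_inbound": [("web-frontend", 8080)]},
    "orders-db":    {"allow_inbound": [("orders-api", 5432)]},
}

def is_allowed(policy, dst, src, port):
    """Check whether traffic from src to dst:port matches an allow rule."""
    rules = policy.get(dst, {}).get("allow_inbound", [])
    return any(s in ("any", src) and p == port for s, p in rules)

print(is_allowed(container_policy, "orders-db", "web-frontend", 5432))  # False
print(is_allowed(container_policy, "orders-db", "orders-api", 5432))    # True
```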
Does containerization imply better resiliency in less-than-resilient clouds, or are we putting all our eggs into just another basket? The recent security fixes to Xen and other hypervisors have forced clouds to reboot literally all of their nodes. This brings down the virtual machines running within those clouds, which shuts down entire applications. The clouds get away with this by leaving resiliency up to the customer. We now have to create more instances of our systems, so that if one part of the cloud goes down, another instance can pick up the load. This is where the new breed of meshed load balancers, like those KEMP is deploying, comes into play. These load balancers detect dead links and reroute automatically, so you don't have to handle failover on your own; you just keep multiple instances of your application running in multiple clouds or parts of a cloud.
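The idea behind that kind of failover can be sketched in a few lines: probe each instance and send traffic to the first healthy one. This is a toy stand-in for what a meshed load balancer does, with made-up endpoints, not KEMP's actual product behavior.

```python
# Toy sketch of failover across instances in different clouds.
# Endpoints are hypothetical; a real meshed load balancer does this
# continuously and transparently.
from urllib.request import urlopen
from urllib.error import URLError

INSTANCES = [
    "https://app.cloud-a.example.com/health",
    "https://app.cloud-b.example.com/health",
]

def first_healthy(instances, timeout=2):
    """Return the first instance whose health endpoint answers 200."""
    for url in instances:
        try:
            if urlopen(url, timeout=timeout).status == 200:
                return url
        except (URLError, OSError):
            continue  # dead link: try the next cloud
    return None

target = first_healthy(INSTANCES)
print("route traffic to:", target or "no healthy instance found")
```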
Why not use live migration or vMotion to preserve VM state and keep the application running? That question has been asked a lot. At the current scale of a large public cloud, that level of automation doesn't exist yet, because you have to wait for a host to fully evacuate before you can do anything with it. That means that, per rack, you could wait perhaps forty times longer than you would with an install-and-reboot cycle. That time affects how long it takes to roll out major upgrades, which is why the best resiliency for an application is managed by the tenant.
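To see where a figure like "forty times longer" could come from, here is a back-of-the-envelope sketch. Every number in it (hosts per rack, minutes to evacuate a host, minutes for a parallel reinstall-and-reboot) is an assumption for illustration, not data from the article.

```python
# Back-of-the-envelope sketch with assumed numbers, purely illustrative.
hosts_per_rack = 40             # assumed rack density
evacuate_minutes_per_host = 30  # assumed time to vMotion all VMs off one host
reboot_cycle_minutes = 30       # assumed install-and-reboot done rack-wide in parallel

# Evacuations are largely serial: each host must drain before it can be
# patched, so the rack-level time scales with the number of hosts.
evacuate_total = hosts_per_rack * evacuate_minutes_per_host   # 1200 minutes
reboot_total = reboot_cycle_minutes                           # 30 minutes

print(f"evacuate-and-patch: {evacuate_total} min")
print(f"install-and-reboot: {reboot_total} min")
print(f"ratio: {evacuate_total / reboot_total:.0f}x longer")  # ~40x
```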
This leads us to using containers. Having many containers running in many VMs in many clouds grants us that resiliency. The future of administration will be to create an automation layer that understands the different layers of abstraction and to automate all necessary changes. The ideal is to make it so that what happens in the hardware (virtual or not) simply does not matter anymore.
We are not there yet, but we’re getting closer. The community is changing direction. Some are at the forefront and worth following to see where they go. Microservices and microsegmentation go hand in hand, though where they are going is still a bit debatable. We need to watch as networking moves up to the container to determine what’s coming down the pipeline.
Does this mean that tomorrow our mission-critical SAP HANA apps will all be in containers? Not yet. But perhaps one day.
Is your organization changing directions?
