I had the opportunity to attend Red Hat Summit and DevNation. Nearly every answer to any question at both events was to “use containers.” While some of those answers were undoubtedly true, others were only partly so. Yes, you can use containers to solve many problems, but what was often overlooked was the underlying infrastructure necessary to provide a base for those containers. Overall, Red Hat Summit delivered on its promise; I will follow up about DevNation at a later time.
Red Hat has a comprehensive cloud story these days. CloudForms and OpenShift are the major players within the suite. OpenStack is an add-on: you do not need it if you use CloudForms (formerly ManageIQ). When you look at this suite, you can see that Red Hat has added the ability to layer in security all the way around.
OpenStack has its own management security, and that continues to improve; however, the rest of the workloads and stacks require a good, solid architecture. Ultimately, Red Hat has decided that SELinux and Linux cgroups are the way to go for security around containers. Specifically, it has integrated orchestration into Red Hat Atomic Enterprise that will deploy SELinux-based controls driven by the policies and tags set up within CloudForms. Those tags can then be used within the containers deployed by OpenShift as well as within the underlying Atomic environment. Since Atomic is an operating system, it can and usually will run within a virtual machine on the KVM hypervisor.
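To make the SELinux piece concrete: on an SELinux-enabled host, every container process carries a security context whose MCS category pair keeps it isolated from its neighbors. Below is a minimal sketch, assuming a Linux host with SELinux enabled; the PID and the context shown in the docstring are illustrative, not taken from any particular deployment.

    # Minimal sketch: read the SELinux context confining a process,
    # assuming a Linux host with SELinux enabled.
    from pathlib import Path

    def selinux_context(pid: int) -> str:
        """Return a process's SELinux context, e.g.
        'system_u:system_r:svirt_lxc_net_t:s0:c123,c456'.
        The MCS category pair (c123,c456) is unique per container,
        which is what keeps one container from touching another's
        files or processes even if both escape their namespaces."""
        # /proc/<pid>/attr/current holds the live context,
        # terminated by a NUL and/or newline
        return Path(f"/proc/{pid}/attr/current").read_text().rstrip("\x00\n")

    if __name__ == "__main__":
        pid = 4242  # hypothetical PID of a containerized process
        print(selinux_context(pid))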
OpenStack itself sits beside this environment to provide management of the virtual machines, while CloudForms manages the workload instances as containers. When you add in Red Hat’s Identity Manager and other security-related tools, you have a pretty compelling single-source story. It is still missing quite a bit in the networking arena, but that can be supplied by NFV within Open vSwitch. Yet how do you segregate workloads? Open vSwitch provides networking, not much in the way of security. For that, you need to look elsewhere, such as the built-in iptables firewall within each Atomic instance, managed by a tool like Illumio or CloudPassage to add one more layer of security.
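As a sketch of what that per-host layer might look like, the snippet below pushes a pair of iptables rules that let one workload subnet reach another on a single port and drop everything else between them. The subnets, port, and helper name are illustrative; a tool such as Illumio or CloudPassage would compute and distribute rules like these centrally rather than by hand.

    # Minimal sketch of a per-host iptables segregation rule.
    # Must run as root; all values below are illustrative.
    import subprocess

    def allow_only(src_cidr: str, dst_cidr: str, port: int) -> None:
        """Permit TCP traffic on one port between two workload subnets
        and drop everything else between them (FORWARD chain,
        first match wins)."""
        rules = [
            ["iptables", "-A", "FORWARD", "-s", src_cidr, "-d", dst_cidr,
             "-p", "tcp", "--dport", str(port), "-j", "ACCEPT"],
            ["iptables", "-A", "FORWARD", "-s", src_cidr, "-d", dst_cidr,
             "-j", "DROP"],
        ]
        for rule in rules:
            subprocess.run(rule, check=True)

    if __name__ == "__main__":
        # e.g., let the web tier reach the app tier on port 8080 only
        allow_only("10.0.1.0/24", "10.0.2.0/24", 8080)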
Scale requires orchestration, and that is what Red Hat Atomic Enterprise with Satellite is all about: orchestrating the deployment of an operating system with just enough bits to run a container, whether that container is Docker or something else. The goal is to move everything to a container delivery system for applications. Docker shares this goal, and perhaps one day we will get there. In a 100% Red Hat (Linux) environment, we are approaching that capability.
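Container-based application delivery boils down to something like the following. This is a minimal sketch, assuming the Docker SDK for Python (pip install docker) and a running local Docker daemon; the registry and image name are hypothetical.

    # Minimal sketch of container-based application delivery.
    import docker

    client = docker.from_env()

    # Pull and start the application exactly as it was built and tested;
    # the host needs only a container runtime, nothing app-specific.
    container = client.containers.run(
        "registry.example.com/myapp:1.0",  # hypothetical image reference
        detach=True,
        ports={"8080/tcp": 8080},  # expose the app on the host
    )
    print(container.short_id)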
For those who deploy Linux at scale, delivering a consistent image has always been an issue. This has given rise to companies like Puppet Labs, Intigua, and others that build an orchestration layer to raise the substrate of the data center up from the hardware, through the hypervisor, to the operating system image. The golden fleece at the moment is a common operating system image. Containers make that possible: the myriad application-driven changes stay contained within the container rather than being spread throughout the operating system, as they are now.
Red Hat has a compelling story for how to secure and deploy modern virtual and cloud environments. Unfortunately, you need to look outside Red Hat to secure and deploy non–Red Hat environments. We are still in a world where Windows is a player, and Red Hat’s answer to that is to deploy its stack on Windows using (yes, you guessed it) virtual machines. The underlying hypervisor really does not matter: you could use VirtualBox, VMware Fusion, Hyper-V, etc. But to deploy Red Hat’s stack, you still need a virtual machine. Containers make the guest operating system a common layer, just as hypervisors made the hardware a common layer.
Where will we go next? As abstractions move up the stack, will we end up with just one platform and language?