There is a recent CVE (CVE-2016-9962) that directly affects container security, and a patch was quickly forthcoming. This raised some interesting questions. Specifically, how do you patch a container infrastructure, and what needs to be patched? The “what” is easy; the “how” is more difficult. As we move to cloud-native applications, where we tear down apps rapidly and rebuild them from whole cloth, patching becomes a crucial issue. There is risk here; the question is how to mitigate that risk, and how to patch for future issues as well. This was the subject of this week’s Virtualization and Cloud Security podcast.
There are two approaches to patching containers, container hosts, and services.
- Patch the container or container host directly, employing the same methods used for other operating systems and applications. However, this often leads to configuration drift, and because the underlying image is never fixed, the next redeploy can reintroduce the very vulnerability you already patched.
- Patch the repositories, or pull from already-patched repositories, that make up the container host and container bits. This often requires more work, but it ensures that newly built containers already include the fixed bits, so there is no need to patch running containers for old vulnerabilities (see the sketch after this list).
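To make the contrast concrete, here is a minimal sketch of what each approach might look like in practice. It is not a prescription from the podcast: the host name, package name, image names, and build context are all hypothetical placeholders.

```python
"""Sketch contrasting the two patching approaches (all names are hypothetical)."""
import subprocess

def sh(cmd):
    """Run a shell command and fail loudly if it does not succeed."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Approach 1: patch in place, as we would a pet. The running host gets the fix,
# but the image it was deployed from does not, so a later redeploy can bring
# the old vulnerability back.
sh("ssh container-host-01 'sudo apt-get update && sudo apt-get install --only-upgrade docker.io'")

# Approach 2: patch (or pull from) the repositories and rebuild, as we would cattle.
# Every new container is created from an already-patched base image, so the fix
# survives teardown and redeployment.
sh("docker pull registry.example.com/base/os:latest")
sh("docker build --pull -t registry.example.com/app/web:patched ./app")
```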
The first option is a continuation of what we do today for most operating systems and applications (or pets). The second option is specific to cloud-native, container-based applications (or cattle). The pets-vs.-cattle debate therefore has an impact on security, patching, and risk as well; how much of an impact depends on your organization.
Let us look at these two approaches to patching as applied to a typical container-based system used within many clouds today. Yes, containers in AWS, Azure, and other clouds run within virtualized container hosts; VMs are involved. From a security perspective, that is actually a good thing. Please listen to the podcast for more on that subject.
Normally, to deploy containers, we either recreate the container host and then place containers within it, or we simply deploy containers onto existing container hosts. Either way, a build server is required. The build server pulls from both code and artifact repositories, and the build produces a new artifact or container image. Then, using Infrastructure as Code automation mechanisms, the container is deployed in a well-known way every time. Automation rules this particular approach.
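As a rough illustration of that build-server flow, here is a hedged Python sketch: it pulls from a code repository, rebuilds the image on top of the latest (patched) base, publishes the artifact, and hands off to the deployment tooling. The repository URL, image tag, and the final deploy command are assumptions; the actual Infrastructure as Code step depends on your tooling.

```python
"""Sketch of a build-server rebuild-and-redeploy flow (hypothetical names throughout)."""
import subprocess

CODE_REPO = "https://git.example.com/team/web-app.git"   # code repository (assumed)
APP_IMAGE = "registry.example.com/app/web:2017-01-11"    # new artifact tag (assumed)

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The build server pulls the application source from the code repository...
run(["git", "clone", "--depth", "1", CODE_REPO, "web-app"])

# ...and builds against the artifact repository; --pull forces the latest (patched) base image.
run(["docker", "build", "--pull", "-t", APP_IMAGE, "web-app"])

# The new artifact is published for the deployment tooling to pick up.
run(["docker", "push", APP_IMAGE])

# Infrastructure as Code then deploys it the same well-known way every time;
# this kubectl call is just one example, and the real step depends on the tool in use.
run(["kubectl", "set", "image", "deployment/web", "web=" + APP_IMAGE])
```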
Use of mandatory access controls with whitelisting provides remediation for today and into the future. Patching needs to look not only at today, but also at tomorrow: how can we remediate other such attacks in the future?
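As one illustration of that whitelisting posture, assuming Docker as the runtime, the sketch below starts a container with every capability dropped and only the ones the application needs added back, privilege escalation blocked, and a read-only root filesystem. The image name and the whitelisted capability are placeholders, and the mandatory access control layer itself (an SELinux or AppArmor policy) would be enforced by the container host on top of this.

```python
"""Sketch: run a container with a whitelist-style security posture (hypothetical image and caps)."""
import subprocess

APP_IMAGE = "registry.example.com/app/web:patched"   # hypothetical image name

cmd = [
    "docker", "run", "-d", "--name", "web",
    "--cap-drop", "ALL",                    # drop every Linux capability...
    "--cap-add", "NET_BIND_SERVICE",        # ...then whitelist only what the app needs (assumed)
    "--security-opt", "no-new-privileges",  # block privilege escalation inside the container
    "--read-only",                          # immutable root filesystem
    "--pids-limit", "100",                  # cap runaway process creation
    APP_IMAGE,
]
print("+", " ".join(cmd))
subprocess.run(cmd, check=True)
```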
Let us know your thoughts, and have a listen to the podcast.