Docker is one of those technologies that, without any great fuss and without anyone noticing, is now everywhere. My experience with Docker is fairly recent and fairly limited, but like many people, I knew enough about it that when something complex came up in a project, I thought of Docker, investigated it, and concluded that it would solve the problem. I wouldn’t call Docker a “Swiss Army Knife”; it has so many more uses than that.

The use I have been most interested in recently is creating a build artifact in environments that don’t naturally produce one, such as PHP. Instead of the WAR file being the build (as it naturally would be in Java), the Docker container becomes the build. I can run the container in loads of places, including developer workstations and test and staging environments, and then deploy it into production. Some of these environments are on AWS and some aren’t, but that doesn’t really bother me.
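
To make this concrete, here is a minimal sketch of what such a build might look like for a PHP application. The base image, tag, and paths are illustrative placeholders rather than anything from a real project:

    # Dockerfile: the container image, not a WAR file, is the build artifact.
    # Assumes the official PHP-with-Apache image; tag and paths are examples.
    FROM php:8.2-apache

    # Bake the application code into the image at build time.
    COPY src/ /var/www/html/

Build it once, and the resulting image is the thing you promote through each environment:

    # Build the artifact, then run the very same artifact anywhere Docker runs.
    docker build -t myapp:1.0 .
    docker run -d -p 8080:80 myapp:1.0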

I was trying to understand what it was about Docker that made it so generically useful, and I think it comes down to two things:

  • A container is like a VM or an Amazon Machine Image (AMI), but it’s properly portable without all the messing around, and it doesn’t carry a whole operating system around as baggage, so it’s much smaller.
  • You can create and configure a container easily and naturally through scripting, either inside or outside the container.

In many ways, working with a container is the same as working with a VM or an AMI, yet in just as many ways it’s different, and it’s much easier to get your head around. There’s no ritual of choosing an image, firing up a VM, installing an operating system, and using some tool to snapshot it. All of that is handled by instantiating the container, which takes one command line, and by the “docker commit” call, which takes another. In a conventional virtualized environment or an IaaS cloud, you are strongly discouraged from accessing the host. Docker works because the host and the container are cleanly and simply linked: the container borrows as much as it can from the host operating system and shares it across containers. Of course, it only works for Linux on Linux, but I never liked Windows anyway.
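
As a rough illustration of those two command lines (the container name, base image, and target tag here are just examples):

    # One command instantiates a container from a stock image so you can
    # install and configure whatever you need inside it:
    docker run -it --name build-env ubuntu /bin/bash

    # ...and a second command snapshots the result as a new, reusable image:
    docker commit build-env myorg/build-env:configured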

At the same time, good technology and a clean abstraction are never going to be enough to explain quite why Docker has crossed the chasm so quickly. So, why has it? The answer is, “it isn’t Amazon Web Services (AWS).”

Now, I know you are saying, “Of course it isn’t AWS; don’t be stupid, it’s not even doing the same thing,” but bear with me here. We all know that Amazon has dominated the IaaS market and controls how IaaS develops through its AWS APIs, which it can evolve without going through the bother of standardization. It turns out, however, that although PaaS provides services above the IaaS layer, people using PaaS need to be able to create and configure images (or containers) to encapsulate non-standard services. This means that a PaaS has to expose an AMI-like layer in order to provide a broad enough range of services to get significant market traction.

From the perspective of consuming layers (e.g., a PaaS), a Docker container performs pretty much exactly the same function as an AMI. As such, you can use it as an interoperability layer that encapsulates the services required to run an application without locking yourself into a particular IaaS. As you build orchestration services around the Docker container, the underlying IaaS technology becomes commoditized without any requirement to standardize APIs at the IaaS layer.
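
In practice, that interoperability is nothing more exotic than pulling and running the same image on whatever host happens to be underneath, whether it’s an EC2 instance, a VM from another provider, or a machine in your own rack. The registry path and image name below are placeholders:

    # The same commands work regardless of which IaaS (or bare metal) sits underneath:
    docker pull registry.example.com/myorg/myapp:1.0
    docker run -d -p 80:80 registry.example.com/myorg/myapp:1.0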

Thus, Docker gives everybody who isn’t Amazon a glimmer of hope that one day, they may be able to deal with Amazon’s AWS market dominance by providing platform-level services without any API standardization. This explains Red Hat’s enthusiasm, despite the fact that Docker reeks of Ubuntu. Pivotal doesn’t currently use Docker for Cloud Foundry, but ActiveState has been looking at the integration, and I predict it will go that way.