DockerCon 2017 was about modernizing traditional applications, or MTA. MTA is the lifting and shifting of traditional Microsoft Windows-based applications into Docker containers. The approach is reminiscent of the physical-to-virtual migrations of 2009. For Docker to grow into brownfield data centers, this is a must. However, could it be doing more? If so, what is it doing that could be improved? MTA is a must for many organizations that want Docker to manage everything, but not every workload fits the same approach. Containers are about agility, with workloads treated like cattle. Can traditional applications be treated this way? We shall see.
MTA is about the lift and shift of Windows workloads into Docker containers. It is about taking a Windows image, whether virtual or physical, and placing the entire image into a container. It is not about migrating applications from Windows to Docker; it is about moving the entire operating system plus whatever applications are already there. In essence, Docker is telling folks, "We can do Windows applications, but we treat them as virtual machines." Now your Windows Server 2008 application can be easily lifted and shifted to Windows Server 2016 using Docker.
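For concreteness, here is a minimal sketch of what the lift and shift amounts to in practice, using the Python Docker SDK (`pip install docker`). The build context, the image tag, and the idea that a VM-to-Dockerfile conversion tool (e.g., Image2Docker) has already produced a build context are my assumptions for illustration, not Docker's published workflow:

```python
# A hedged sketch: build and run a "lifted" Windows VM image as a container.
# Assumes a VM-to-Dockerfile conversion tool has already emitted a build
# context under ./converted-vm; paths and tags here are hypothetical.
import docker

client = docker.from_env()  # a Windows container host is required for this image

# The generated Dockerfile starts FROM a full Windows base image
# (e.g., microsoft/windowsservercore) and layers the entire captured
# filesystem on top: the whole OS ships, not just the application.
image, _logs = client.images.build(  # SDK >= 3 returns (image, logs)
    path="./converted-vm",
    tag="legacy-app:win2016",
)

# Running it is effectively booting the old server inside a container.
container = client.containers.run("legacy-app:win2016", detach=True)
print(container.id)
```

Note that the unit being built and run is the operating system image, which is precisely the point of contention. I begin to wonder about several items here: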
- Why are we migrating virtual machines into Docker? When Windows Server 2008 and 2012 are no longer supported, they will not be supported even if they run in containers. In addition, at the moment, most container hosts happen to be virtual machine based; that is the way it is in Amazon, Azure, and IBM Bluemix/SoftLayer. I am confused about the need to lift and shift entire operating systems just to run them in a container within yet another virtual machine.
- Why aren’t we using tools like ThinApp or App-V to create containers of the applications themselves and then run those within Docker? I think this would be a better approach, as all of Windows would not be required.
- Docker claims to be about migrating the application, but in fact it is about migrating the operating system. The first task in its approach is to identify the application. Okay: my application consists of four clusters of between three and seventeen nodes each, with MongoDB, MySQL, memcached, Apache, custom code, POSIX message queues, mail servers, custom services and servers, as well as accounting (this is a real application). How does Docker plan on identifying the application, or even what services exist? You still have to do this manually, or, more to the point, you just have to know (a sketch of what automated discovery can and cannot see follows this list). But what if that is institutional knowledge that walked out the door three months ago? Do the people at the organization today know whether the mainframe is in use (not for this application) or how data protection is done (very complicated in this case)? Does anyone even know who holds all that information?
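To illustrate how far tooling gets on its own, here is a hedged sketch of the easy part: enumerating the services listening on a single host with psutil (`pip install psutil`). It yields process-to-port mappings, but it says nothing about dependencies between nodes, the mainframe, or the data protection scheme; that is where institutional knowledge still rules:

```python
# A minimal discovery sketch: list listening services on one host.
# May require administrator privileges. This is an inventory, not an
# application map; it cannot see cross-node dependencies.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_LISTEN or conn.pid is None:
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue  # process exited between enumeration and lookup
    print(f"{name:<20} {conn.laddr.ip}:{conn.laddr.port}")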
MTA, in my mind, fails at the first element of its approach. It does not know anything about the application; rather, it is yet another subtle statement that Docker can replace virtual machines. Since Docker container hosts run within virtual machines, I am not sure why this is still the message. Bare-metal Docker is not exactly rare, but it is not used within any public cloud I know about. To be more than the next generation of applications, Docker feels it must play in the traditional application space. I think this is a mistake.
Docker is about the future, not the past. It should be targeting ways to migrate brownfield applications into containers not by massive lift and shift, but by determining which services can be containerized. The tooling should by now be able to identify 90% of an application (some of it will always require institutional knowledge and perhaps even consultants). Within that 90%, determine which services make the best case for migration to containers. Start simple, even small, and then go from there. MTA is about modernizing, not about having yet another management interface. There are some aspects of MTA I like, however:
- The new credential spec for Windows containers, managed via PowerShell. This alleviates the need to share secrets everywhere, which creates multiple attack points. Instead, the credential spec approach communicates with Active Directory (see the sketch after this list). Docker, please make this available for Linux containers as well.
- The new hub with Docker Certified containers. This means that Docker looks at and inspects the containers for issues. On a scale from Google Play to the Apple App Store, Docker claims this is closer to the Apple App Store. This gives me some hope that Docker Hub hosts secured containers. However, it does not alleviate the need for your own due diligence.
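For the credential spec item above, here is a minimal sketch of how a Windows container might be launched with one, again via the Python Docker SDK. It assumes a group Managed Service Account (gMSA) credential spec JSON has already been generated (Microsoft ships a PowerShell module for this) and placed in the Docker host's CredentialSpecs directory; `webapp.json` and the image name are placeholders:

```python
# A hedged sketch: attach an Active Directory identity to a Windows
# container via a gMSA credential spec instead of baked-in secrets.
import docker

client = docker.from_env()

container = client.containers.run(
    "legacy-app:win2016",  # hypothetical Windows container image
    detach=True,
    # Points at a spec file under the host's CredentialSpecs directory;
    # the container authenticates to AD with no password in the image,
    # the environment, or the command line.
    security_opt=["credentialspec=file://webapp.json"],
)
```

The appeal is that the secret itself never leaves Active Directory; the container carries only a pointer to the spec. That is the behavior I would like to see on the Linux side as well.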
MTA in its current form is an approach that needs some work. I would rather it be about the future than the past. The first step is the hardest: identify the application. In ten years, we seem to have made no progress on this. It is a difficult task, yet there are companies that can get us 90% of the way there. That is a start. The rest will have to come from institutional knowledge. How does Docker propose to capture that knowledge?
The ecosystem should be leveraged to do this. The tools do exist, but no one has put them together yet. Would you move your Windows images into Docker, just to manage everything via Docker while still managing the underlying virtual environment, whether it is in a cloud or not?