Most companies screen for intelligence and experience in potential recruits, but Google also looks for “Googliness,” a mix of passion and drive that is difficult to define but easy to spot. What Google has found is that these qualities often come intertwined with a desire to use technology to make the world a better place, and to help Google do the same.
Google encourages these “intrapreneurs” to develop and pitch their ideas to the larger team, and a great many of those ideas end up becoming successful ventures. This workplace environment, in which creative impulses are backed by engineering and managerial support, fosters the passion and drive to define and create the next big thing. And now, from this “Googliness,” comes Kubernetes.
Before I delve into Kubernetes, let me offer a brief overview of how Google looks at and works with its cloud computing platform. All of the applications and services that Google runs on its platform execute inside what Google likes to call “containers.” Google Search, Gmail, and Google Plus, to name just a few, all run in Linux containers. To put things into perspective, each week Google launches more than two billion container instances across its data centers around the globe. It is this container structure that gives Google its reliable service availability along with a very efficient way to scale.
Now, before this, there was Borg. What is this “Borg”? It’s not your run-of-the-mill Star Trek Borg, although resistance is still futile. As described by Wired,
“Borg was the sweeping software system that managed the thousands of computer servers underpinning Google’s online empire. With Borg, Google engineers could instantly grab enormous amounts of computing power from across the company’s data centers and apply it to whatever they were building—whether it was Google Search or Gmail or Google Maps.” (Cade Metz, “Google Open Sources Its Secret Weapon in Cloud Computing,” Wired, June 14, 2014)
Google currently uses Omega internally, but the concept is the same. And now Google has released this technology to the open-source community via GitHub as Kubernetes, an open-source container cluster manager. Resistance is futile.
Its name derived from the ancient Greek word for the helmsman of a ship, Kubernetes is touted as an easy and efficient way to run applications distributed across legions of machines. One of the best things about this tool is that it can also manage applications running on other providers’ infrastructure, such as Amazon’s, and it will work in the private cloud space as well. This heterogeneous tool has the potential to play a role in both the public and private cloud space, which just might entice more customers to Google’s cloud.
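To give a concrete sense of the model: a Kubernetes workload is described declaratively in a small manifest handed to the cluster, which then decides where to run it. The sketch below is a minimal pod definition; the names (`my-web`, the `nginx` image) are illustrative assumptions, not something from Google’s announcement.

```yaml
# Minimal Kubernetes pod manifest (illustrative names throughout).
# Submitting this tells the cluster *what* to run; Kubernetes decides *where*.
apiVersion: v1
kind: Pod
metadata:
  name: my-web            # hypothetical pod name
  labels:
    app: my-web
spec:
  containers:
  - name: web
    image: nginx          # any container image could go here
    ports:
    - containerPort: 80
```

You would submit it with `kubectl create -f pod.yaml`. The same manifest works whether the cluster runs on Google’s cloud, on Amazon, or on machines in a private data center, which is exactly the heterogeneity described above.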
The Kubernetes announcement came on the heels of the announcement of support for Docker images inside Google App Engine. A Docker image is, roughly speaking, a portable snapshot of an application together with everything it needs to run; containers are launched from these images. Developers can build on a library of existing images and, when needed, get assistance from the Docker community to deploy these containers into a managed environment.
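As a sketch of how such an image is typically built (the file below is illustrative, not taken from the announcement): a Dockerfile extends a base image from the public library with an application’s own files, and `docker build` turns the result into a new image.

```dockerfile
# Illustrative Dockerfile: layer an application on a public base image.
FROM python:3-slim        # base image pulled from the public library

WORKDIR /app
COPY app.py .             # hypothetical application file

# Command a container runs when launched from this image.
CMD ["python", "app.py"]
```

Building it (`docker build -t my-app .`) produces an image; every container started from that image (`docker run my-app`) is an identical, isolated instance, which is what makes launching instances by the billions tractable.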
Is this approach the most efficient way to manage resources and deploy applications? For applications that have to scale in a big way, it seems the most logical approach, except perhaps for the Linux shops that have no desire to add more technology and complexity to their environments.
The future, however, is ever changing, and time will tell.
Hi Steve,
This one looks interesting. I’ve been trying to work out how these container clustering and orchestration technologies relate to PaaS solutions. Do you have any thoughts?
Mike