Container technologies and developers alike work with applications. End users use applications. Yet administrators think about the systems that make up those applications with tools that are not application-centric but rather system-, VM-, or container-focused. Because the tools are not focused on the application, those who support the application do not know how the application is defined. This needs to change. In fact, until it does, a business cannot transform into the next generation of cloud-native applications; it simply will not be ready. So, then, how do we get ready?
First, we need to break down the barriers and use tools that understand the application, in part or in its entirety. But even to get to the point of picking tools, IT needs to focus on the application. Think about this from the perspective of the end user. The end user does not care whether there are too few licenses, a system has crashed, or a cable has been cut. To them, every one of these issues leads to the same thing: a dead, unavailable application. This means that in the world of IT as a Service, the end user wants to order an application, not its component parts.
We may have an application that requires an Oracle license, API interfaces into seventeen different microservices, and deployment onto three clouds using 1,000 containers within 1,000 virtual machines, but the end users simply do not care about any of that. They just want the application to work. IT folks care, but what end users want to know is how well their application is doing, not how its component parts are faring. Let us think about it another way: do we buy an engine, four wheels, an exhaust pipe, and a trunk separately? Or do we buy a vehicle? Granted, I can customize some things, but I still end up buying a vehicle.
IT as a Service functions the same way. Given this, we need tools that also work with applications. We need to think about reporting not on a virtual machine’s issue, but on an application’s issue. Today we use virtual machines; tomorrow we will use containers or something else. At that point, we will need to look at the bigger picture: the entire application.
That is the key: the big picture. However, we need tools that first look at the application and then allow us to drill down into its component parts, such as storage, networking, compute, and the location of the failure or issue. “But,” you say, “we have that; we have it at the container or VM level.” Of course we do, but how does that translate to an actual application failure, or to the business? If, for example, an issue is within a database, do we know how many applications are impacted? It could be only one, it could be hundreds, or it could be many more if a cloud database is involved. Or it could just be your instance.
By looking at the application, by defining the application, we have more information with which to work. However, that definition needs to be generated automatically. Why? Because there can be many legacy components with many moving parts that we may not know about or even think of. As we move more to the cloud, the location of those components may not be something we see every day. Our definition of the application needs to span clouds, data centers, and locations. Tools such as VMware vRealize Infrastructure Navigator (VIN) and ExtraHop can provide upstream and downstream dependencies for any virtual machine or system; VIN can even provide a list of microservices in use. Yet these tools are rarely used for this purpose. They can begin to give us a definition of the application in an automated fashion.
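To make that concrete, here is a minimal sketch of what such an automatically discovered definition might look like. It is purely illustrative: the Component, ApplicationDefinition, and DependencyMap names are hypothetical, not the data model of VIN, ExtraHop, or any other product. It records observed upstream and downstream dependencies and answers the question posed above: which applications are impacted when a shared component, such as a database, fails?

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """A discovered piece of an application: a VM, container, database, or microservice."""
    name: str
    kind: str      # e.g. "vm", "container", "database", "microservice"
    location: str  # e.g. "on-prem-dc1", "aws-us-east-1"

@dataclass
class ApplicationDefinition:
    """An automatically discovered definition of a single application."""
    name: str
    components: frozenset[Component]

class DependencyMap:
    """Upstream/downstream dependencies observed across all known applications."""

    def __init__(self) -> None:
        self._downstream: dict[Component, set[Component]] = defaultdict(set)
        self._upstream: dict[Component, set[Component]] = defaultdict(set)
        self._apps: list[ApplicationDefinition] = []

    def register(self, app: ApplicationDefinition) -> None:
        self._apps.append(app)

    def observe(self, source: Component, target: Component) -> None:
        """Record that `source` was seen talking to `target` (e.g. from flow or wire data)."""
        self._downstream[source].add(target)
        self._upstream[target].add(source)

    def downstream(self, component: Component) -> set[Component]:
        """Components that `component` was observed depending on."""
        return set(self._downstream[component])

    def impacted_applications(self, failed: Component) -> set[str]:
        """Walk upstream from a failed component and return every application
        that depends on it, directly or indirectly."""
        seen, queue = {failed}, deque([failed])
        while queue:
            for dependent in self._upstream[queue.popleft()]:
                if dependent not in seen:
                    seen.add(dependent)
                    queue.append(dependent)
        return {app.name for app in self._apps if app.components & seen}

# A database shared by two applications:
db  = Component("orders-db", "database", "aws-us-east-1")
web = Component("storefront-web", "microservice", "on-prem-dc1")
api = Component("billing-api", "microservice", "aws-us-east-1")

deps = DependencyMap()
deps.register(ApplicationDefinition("Storefront", frozenset({web, db})))
deps.register(ApplicationDefinition("Billing", frozenset({api, db})))
deps.observe(web, db)
deps.observe(api, db)

print(deps.impacted_applications(db))  # -> {'Storefront', 'Billing'}
```

The same walk works whether the components are VMs today or containers tomorrow; only the discovered inventory changes, not the question we ask of it.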
With a solid definition of the application, we can turn it into a blueprint for use in designing new aspects of the application. Such blueprints can then be used by other tools, such as Ravello, GigaSpaces, VMware Application Director, and many more. In addition, blueprints can be used by data protection, security, audit, and many other tools to get a handle on each and every application.
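Continuing the sketch above (and, again, this is a hypothetical format, not the blueprint schema of Ravello, GigaSpaces, or Application Director), a discovered definition could be serialized into a plain JSON blueprint that provisioning, data protection, security, and audit tooling could all read from:

```python
import json

def to_blueprint(app: ApplicationDefinition, deps: DependencyMap) -> str:
    """Serialize a discovered application definition into a plain JSON blueprint.
    Only dependencies between this application's own components are listed."""
    doc = {
        "application": app.name,
        "components": [
            {
                "name": c.name,
                "kind": c.kind,
                "location": c.location,
                "depends_on": sorted(d.name for d in deps.downstream(c)
                                     if d in app.components),
            }
            for c in sorted(app.components, key=lambda comp: comp.name)
        ],
    }
    return json.dumps(doc, indent=2)

# Reusing the Storefront application and dependency map from the previous sketch:
print(to_blueprint(ApplicationDefinition("Storefront", frozenset({web, db})), deps))
```

Keeping the blueprint as plain data is the point: any tool that can parse JSON can consume the same definition, rather than each tool maintaining its own partial picture of the application.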
Companies like SIOS, Cirba, and ExtraHop seem to be moving in this direction, judging by their latest demonstrations, but we are not quite there yet. We need a source of truth for each application. Given the move to cloud-native applications, the definitions of those applications should be well known. That source of truth may be the Jenkins, Ansible, or Vagrant server in use. While a blueprint is the start, the end is what is actually deployed, automatically, and the location to which it is deployed. Location is not usually detailed within a blueprint, and location may change with fluctuating cloud costs, legal requirements, and politics.
Without a good definition of the application, one that is not human-generated, we do not know all the dependencies, and as such we cannot truly understand how to fix all the problems. We can no longer guess at root cause analysis; we need to know where to look. We need to understand the application and what it is telling us, and correlate that with the VMs and containers, with any underlying dependencies, and with the order in which those dependencies are connected.
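As one last sketch built on the hypothetical DependencyMap above, a root-cause pass can run in the opposite direction from the impact query: start at the application, follow its downstream dependencies, and flag unhealthy components whose own dependencies are all healthy. The is_healthy check simply stands in for whatever monitoring data is actually available.

```python
from collections import deque
from typing import Callable

def probable_root_causes(app: ApplicationDefinition,
                         deps: DependencyMap,
                         is_healthy: Callable[[Component], bool]) -> set[Component]:
    """Walk downstream from an application's own components through every observed
    dependency, collect the unhealthy ones, and keep only those whose direct
    dependencies are all healthy: a simple heuristic for where the failure started."""
    seen: set[Component] = set()
    unhealthy: set[Component] = set()
    queue = deque(app.components)
    while queue:
        current = queue.popleft()
        if current in seen:
            continue
        seen.add(current)
        if not is_healthy(current):
            unhealthy.add(current)
        queue.extend(deps.downstream(current))
    return {c for c in unhealthy
            if all(is_healthy(d) for d in deps.downstream(c))}

# Reusing the earlier example: the database is down, which also makes the web tier
# look unhealthy, but only the database is flagged as the probable root cause.
down = {db, web}
causes = probable_root_causes(ApplicationDefinition("Storefront", frozenset({web, db})),
                              deps,
                              is_healthy=lambda c: c not in down)
print(causes)  # -> {Component(name='orders-db', kind='database', location='aws-us-east-1')}
```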
As application deployment complexity increases, our view has to broaden accordingly. For that, we need great tools. We have a start now. We need to keep driving toward an application-centric view of IT and cross those silos.