Every part of an infrastructure, application, or cloud produces data. Applications consume data, and myriad data is presented to IT organizations for their use, edification, insights, and more. But what really is this data? Can we classify it in some way? Data classifications should not be just “structured” and “unstructured”; they must go deeper than that. To understand how IT operations analytics (ITOA) can act on data, we first need to classify data into something we can comprehend. ITOA leads to insights that can be used to predict capacity, track applications, and tell us when we have security events.

Data falls into several categories:

  • Key Performance Indicators: These are often numbers that show the status of a given application, perhaps representative of application stability, throughput, successfully served resources, etc. They differ for every application. For one application I know, there is a single number per minute that tells the company whether it is making or losing money. Such numbers are tracked per day, over time, etc. New Relic and others use this data.
  • Network Data: This type of data includes everything from flow data about the networks in use to pure packets. We also include data retrieved from switches and other network devices in this category. If the application is communicating, this data contains not only the communication itself but also information about the devices through which it runs. ExtraHop thrives on this data.
  • General System and Cloud Logs: This type of data reports on the system, often with informational messages about healthy operation as well as failures. In this data we often find logon requests and failed queries, among other general system information, good and bad. Log analysis was invented to handle the influx of this often confusing and convoluted data. Splunk was originally created to handle this type of data.
  • Infrastructure Data: This data tells you the health of your infrastructure, whether virtual, cloud, or physical. We use this data to plan future capacity as well as to manage configurations. Zenoss, VMTurbo, Cirba, and others manage this data and produce refined versions of it.
  • Audit Logs: These can be logs from a cloud-aware security broker, from the cloud itself, or from some other mechanism. Logs from AWS CloudWatch fall into this category. Any tool that audits the system, including other IT operations tools, generates this type of data. Many virtualization security tools combine several features; a common one is an audit of the virtual environment against a hardening guide or a compliance standard. HyTrust, Catbird, and others produce this data.
  • Application Data: We can also look at the metadata that surrounds application data, such as SQL queries, file names and placements, etc. There is a host of available data to analyze. New Relic and SolarWinds thrive on these types of data.
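To make the log-analysis category above concrete, here is a minimal sketch of the kind of rule a tool like Splunk encodes for you: extracting failed logon attempts from general system logs and counting them per source address. The log lines and the regular expression are hypothetical (real syslog formats vary by system and daemon); the point is that you must understand the data's shape before you can write such a rule.

```python
import re
from collections import Counter

# Hypothetical syslog-style lines; real formats differ per system.
LOG_LINES = [
    "Mar 10 12:00:01 host1 sshd[101]: Failed password for root from 10.0.0.5",
    "Mar 10 12:00:03 host1 sshd[102]: Accepted password for alice from 10.0.0.9",
    "Mar 10 12:00:07 host1 sshd[103]: Failed password for root from 10.0.0.5",
]

# Pattern matching one specific failure message; a real rule set needs many.
FAILED = re.compile(r"Failed password for (\S+) from (\S+)")

def failed_logons(lines):
    """Count failed logon attempts per source IP address."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(2)] += 1
    return counts

print(failed_logons(LOG_LINES))  # Counter({'10.0.0.5': 2})
```

Even this toy rule required knowing which message marks a failure and where the address sits in it, which is exactly the data knowledge the prebuilt dashboards package up.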

What can we do with this data? That is why Splunk, Elasticsearch, Prelert, and other products exist: to help you get a handle on it. This is not as easy as it sounds. To write effective rules for these tools, you still need to understand the data. This is one reason the preset dashboards that ship with Splunk, VMware vRealize Log Insight, and others are such a help: the people who created them knew the data well.
This is where a community around one of these tools comes in very handy: it can help create the dashboards you need. Unfortunately, not all dashboards and analytics are created equal, and often the data we need to analyze is not part of any dashboard. So we are back to needing to understand both the data and the tools in order to create our own views of the IT world, and to make use of tools that aid us in ITOA for our environment.
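Building your own view often starts with exactly this kind of roll-up: reducing raw events to a key performance indicator such as the per-minute making-or-losing-money number mentioned earlier. A minimal sketch, assuming a hypothetical stream of timestamped revenue deltas (the event format and values are invented for illustration):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical (timestamp, revenue_delta) events; in practice these would
# come from application logs or a metrics pipeline.
EVENTS = [
    ("2024-01-01T09:00:05", 120.0),
    ("2024-01-01T09:00:40", -30.0),
    ("2024-01-01T09:01:10", 75.0),
]

def per_minute_kpi(events):
    """Roll raw events up into one number per minute (net revenue here)."""
    buckets = defaultdict(float)
    for ts, value in events:
        minute = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:%M")
        buckets[minute] += value
    return dict(buckets)

print(per_minute_kpi(EVENTS))
# {'2024-01-01 09:00': 90.0, '2024-01-01 09:01': 75.0}
```

The hard part is not the aggregation itself but knowing which fields in your data feed it, which is why understanding the data always comes before the tooling.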