There are many folks who consider AI to be the be-all and end-all of analytics. I could not agree more. However, are we at AI yet, or is there more to AI than we have today? Artificial intelligence has been a goal for decades. What has changed to make it potentially possible? How does it apply to our current IT and business problems? Is AI nothing more than machine learning?
Some will argue that machine learning falls within the definition of AI. Strictly speaking, they are correct. However, there is AI and there is machine learning. So, what are the stages on the way to AI?
More than anything, it boils down to what you want to ask of the data you are analyzing. For example, if you just want a count of occurrences of X, that is not even machine learning: it is a simple counting algorithm that scans the data for X and tallies what it finds. Would you consider the word count within Microsoft Word to be AI? Most people would not.
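As a point of comparison, here is a minimal sketch of that kind of counting: no learning, no rules, just a scan. The sample text and the target word are invented for illustration.

```python
# A plain counting scan: no learning involved, just tallying occurrences.
def count_occurrences(text: str, target: str) -> int:
    return sum(1 for word in text.lower().split() if word == target.lower())

document = "the fox jumped over the lazy dog near the fence"  # sample text
print(count_occurrences(document, "the"))  # -> 3
```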
However, would you consider the grammar checkers within Microsoft Word to be AI? They are not. Grammar checkers and a host of other applications are rules-based engines. In other words, they apply a set of semantic rules to determine whether there is an issue. There is no true understanding of the data. In the security world, we call that a signature-based approach; it has been the main capability behind antivirus for years.
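A rules engine of this kind can be sketched as a list of patterns checked against the input. The two rules below are invented for illustration; real grammar checkers and antivirus signature sets are far larger catalogs of the same idea.

```python
import re

# Each "signature" is a pattern plus the message to raise when it matches.
# These rules are illustrative only; there is no understanding of the text.
RULES = [
    (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE), "repeated word"),
    (re.compile(r"\bcould of\b", re.IGNORECASE), "did you mean 'could have'?"),
]

def check(text: str) -> list[str]:
    return [message for pattern, message in RULES if pattern.search(text)]

print(check("We could of done the the analysis earlier."))
# -> ['repeated word', "did you mean 'could have'?"]
```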
So, if we just introduced understanding of the data, would that make this AI? In some ways, it would appear to be AI, but in many ways, it is still just rules being applied. Early in my career, I was asked to program a rules engine for the farming community around nitrogen levels in soil. There were no fancy algorithms, just rules applied in a decision-tree approach. It worked, and the farmers were impressed enough to use the technology. However, we understood the data only as far as was needed to apply the rules.
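In that spirit, a decision-tree rules engine is just nested conditionals. The crops and threshold values below are hypothetical, not the agronomy values from the original project.

```python
# A toy decision-tree rules engine for nitrogen advice.
# Crops and threshold values are hypothetical; only the structure matters.
def nitrogen_advice(soil_nitrogen_ppm: float, crop: str) -> str:
    if crop == "corn":
        if soil_nitrogen_ppm < 10:
            return "apply full nitrogen rate"
        elif soil_nitrogen_ppm < 20:
            return "apply reduced nitrogen rate"
        return "no additional nitrogen needed"
    # Legumes fix their own nitrogen, so the rule is simpler.
    if crop == "soybean":
        return "no additional nitrogen needed"
    return "no rule for this crop"

print(nitrogen_advice(8.5, "corn"))   # -> "apply full nitrogen rate"
```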
What happens when we start to understand the data? The first axis that is easy to understand is time: there are only so many hours in the day. Therefore, many things are now analyzed against time, as in the time-series approach to analytics. Time is the easiest element of data to understand programmatically. We are now combining rules with counting algorithms to produce analytics over time. The rules about time are pretty much immutable. The rules surrounding the items we count are not immutable, and they give us another axis within the graphs we display to better understand the data. This in itself is still not AI, but it is progress. We are now combining rules with mathematical functions to distill the data into something useful, usually by applying another set of rules to the results.
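A sketch of that combination, assuming made-up event timestamps and an hourly bucket: count within time buckets, then apply a rule to the counts.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event timestamps; in practice these would come from logs.
events = [
    datetime(2023, 5, 1, 9, 12), datetime(2023, 5, 1, 9, 47),
    datetime(2023, 5, 1, 10, 3), datetime(2023, 5, 1, 10, 8),
    datetime(2023, 5, 1, 10, 55),
]

# Count events per hour (the counting algorithm applied along the time axis).
per_hour = Counter(e.replace(minute=0, second=0) for e in events)

# Then apply a rule to the counts (the second set of rules on the results).
for hour, count in sorted(per_hour.items()):
    flag = "ALERT" if count > 2 else "ok"
    print(hour, count, flag)
```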
Those rules are often thresholds, but they can also be algorithms that compare two sets of time-series data within a window of time. Threshold-based approaches end up being nothing more than comparisons, but they are an important step in the progression to AI.
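A threshold check is just a comparison, and comparing two series over the same window is not much more. The series and the 20% threshold below are invented for illustration.

```python
# Compare two series over the same window; flag points where they diverge
# by more than a fixed threshold. Values are made up for illustration.
web_latency_ms = [110, 115, 120, 300, 118]
db_latency_ms  = [100, 105, 112, 119, 114]

THRESHOLD = 0.20  # flag when web latency exceeds db latency by more than 20%

for i, (web, db) in enumerate(zip(web_latency_ms, db_latency_ms)):
    if web > db * (1 + THRESHOLD):
        print(f"sample {i}: web {web} ms vs db {db} ms exceeds threshold")
# -> sample 3: web 300 ms vs db 119 ms exceeds threshold
```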
Now comes the tricky part and a real shift in direction. By saving the upper and lower values over time, we can learn what is normal and what is abnormal. We now have a history of our highs and lows, along with whether they violate some threshold, and we can automatically adjust the threshold so that we know whether the data is normal for the system we are analyzing. When we spot a pattern that does not match, we can do all sorts of things. I call this first-order machine learning.
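A minimal sketch of that idea, assuming a small rolling window and a three-sigma band (both invented choices): the baseline is learned from recent history, and the effective threshold adjusts itself as new data arrives.

```python
from collections import deque
from statistics import mean, stdev

# Learn a rolling baseline of "normal" and flag values outside it.
# The window size and the three-sigma band are illustrative choices.
WINDOW = 5
history = deque(maxlen=WINDOW)

def is_abnormal(value: float) -> bool:
    if len(history) < WINDOW:
        history.append(value)
        return False                     # still learning what normal looks like
    center, spread = mean(history), stdev(history)
    history.append(value)                # the baseline keeps adjusting over time
    return abs(value - center) > 3 * spread

for v in [10, 11, 9, 10, 12, 11, 10, 55]:   # a short, made-up metric stream
    print(v, is_abnormal(v))                # only 55 is flagged as abnormal
```

Note that the flagged value is folded back into the baseline here; a real system would have to decide whether to exclude such points so a single spike does not skew what "normal" means.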
First-order machine learning looks for patterns within the data over time to determine whether what we are seeing is normal. We are now learning about the data so that the questions we ask of it actually make sense. However, this approach, like threshold approaches, often leads to false positives, so we need to refine our machine learning even more. We could add more elements, or we could approach the data differently. Many false positives arise from single spikes within the window of analysis; we cannot tell whether that was normal behavior unless we look at a broad enough swath of data, potentially years' worth. This is where most log analyzers, such as Elasticsearch, Splunk, etc., sit.
So, instead of looking at one window of data, we apply a second-order function and look at patterns. They are still based on time, but the pattern within one window of time is compared to the same window of time across the years. We end up looking at less data overall, yet we have a great way to smooth out false positives without losing data resolution. We have now learned more about our data than ever before. This is where performance and capacity management tools, such as SIOS, Turbonomic, etc., have gone.
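A sketch of that second-order comparison, with fabricated numbers: the same window (say, the first Monday of May, 9-10 a.m.) is judged against the corresponding window from prior years rather than against its immediate neighbors.

```python
from statistics import mean, stdev

# The same hour-long window, one reading per prior year (fabricated values).
same_window_prior_years = [102, 98, 105, 101, 99]
this_year = 160                                   # the window being judged

baseline = mean(same_window_prior_years)
spread = stdev(same_window_prior_years)

# A one-off spike only counts as abnormal if it falls outside what this
# exact window has looked like historically.
if abs(this_year - baseline) > 3 * spread:
    print(f"{this_year} is abnormal for this window (baseline {baseline:.0f})")
else:
    print(f"{this_year} is within the historical range for this window")
```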
However, this is still not AI. It is a form of machine learning, and it is still algorithm based. Would a third-order approach to machine learning be AI? It could be, as the difference between AI and machine learning is subtle. Whereas machine learning starts from a predefined basis or bias, AI does not. Initially, there are algorithms, but in essence, an AI can be trained to know what something is and then spot that something in the data without human intervention. This is where ImageNet and other efforts like it come in.
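As a hedged illustration of that jump, here is a sketch that uses a model pretrained on ImageNet to label an image without any hand-written rules for what it contains. It assumes torchvision (0.13 or later) is installed and that example.jpg is a local image; both are assumptions, not part of the original text.

```python
# Recognition with a model pretrained on ImageNet (assumes torchvision >= 0.13).
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ResNet18_Weights.DEFAULT      # ImageNet-trained weights
model = models.resnet18(weights=weights).eval()

preprocess = weights.transforms()              # the matching resize/normalize
image = read_image("example.jpg")              # hypothetical local image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)

label = weights.meta["categories"][probabilities.argmax(dim=1).item()]
print(label)                                   # e.g. "tabby cat" for a cat photo
```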
In fact, ImageNet and the recognition logic built on it underpin self-driving cars today. The jump from machine learning to AI involves recognizing what is actually in the data and allowing a natural set of responses. Think of how we humans look at data and intuit what we are seeing. AI is like that. The time series is no longer important; the data in its entirety is.