I want to preface this post with a quote from a movie to set the stage. Bonus points for guessing the movie the quote is from.
Captain: Define “hoe-down.”
Ship’s Computer: Hoe-down: A social gathering at which lively dancing would take place.
[AUTO appears near the captain]
Captain: AUTO! Earth is amazing! These are called “farms.” Humans would put seeds in the ground, pour water on them, and they grow food—like, pizza!
The voice-activated personal virtual assistant has become a mainstream technology. You’ve probably used one, and you’ve certainly seen them used in advertising. Virtual assistants such as Alexa, Siri, Cortana, and Google Assistant, just to name a few, are becoming more common as time goes on. They’re probably in your home and on your smartphone. They’re everywhere we go.
Where we don’t see much of this kind of technology is in corporate IT. At least not yet. That may be changing as DevOps practices shift toward chat-based and voice-guided virtual assistants for a variety of daily operational tasks: monitoring applications, provisioning virtual machines, increasing compute resources—you name it. For all practical purposes, if you already have a workflow that automates a common task, then from a very high level, a virtual assistant is simply another kind of input and output device. There is, of course, much more to it than that, but eager DevOps teams are already focusing on it.
A common obstacle in DevOps shops is that teams are distributed across different locations, whether physical or logical. Voice-activated bots that let you execute commands as though you were chatting with another person are therefore very appealing. An interaction with a voice-activated assistant is expected to feel as close as possible to an interaction with a real person: the goal is for users to speak their commands and hear the assistant’s responses as speech rather than reading them as text.
I believe there will be a number of steps along the way before we get to the point of having voice interaction with our data center automation. The first step will be having automated bots present and available in a group-chat environment, where commands can be issued in the chat channel and the bots’ responses appear in the group chat for all to see. During most major IT outages and other crises, stakeholders and the primary technical people are already assembled in a group chat as well as on a bridge call. Being able to work directly from the chat room could offer a clear and distinct advantage in resolving crises, bringing greater speed and closer collaboration while making use of automated services. Speed, collaboration and automation are among the main pillars of the DevOps philosophy. Think of the possibilities, especially when it comes to the automation code itself. The voice or chat bot can query which team member checked in a given section of code and notify that person directly, instead of other team members having to comb through logs and events; a rough sketch of that idea follows below. Is there a better way of resolving an issue than knowing you have the right person looking in the right place? Faster than a speeding bullet?
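To make that chatops idea concrete, here is a minimal sketch, in Python, of a bot command that looks up who last changed a file and calls them out in the channel. Everything here is an assumption for illustration: the `!blame` command name, the `post_message` callback standing in for a real chat client, and the use of `git log` to find the author. It is not any particular vendor’s bot framework.

```python
import subprocess


def last_committer(path: str, repo_dir: str = ".") -> str:
    """Return the author of the most recent commit that touched `path`.

    Assumes it is run inside a git repository.
    """
    out = subprocess.run(
        ["git", "log", "-1", "--pretty=format:%an <%ae>", "--", path],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def handle_chat_command(text: str, post_message) -> None:
    """Tiny command parser: '!blame <path>' answers with the last committer."""
    if text.startswith("!blame "):
        path = text.split(maxsplit=1)[1]
        author = last_committer(path)
        post_message(f"Last change to {path} was by {author}. Looping them in.")


if __name__ == "__main__":
    # Stand-in for a real chat client: just print the reply to the console.
    handle_chat_command("!blame src/checkout/payment.py", post_message=print)
```

A real bot would also map the git author to a chat handle and @-mention them, but the routing logic would look much the same.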
It appears that some companies are already jumping on this bandwagon. Meet Davis, the Jarvis for DevOps. Dynatrace, a company that provides application performance monitoring solutions, has launched Davis, a virtual assistant that can answer specific questions about the health of the endpoints it is monitoring, obviating the need for people to dig through a series of dashboards or logs. Davis can speak via Alexa, and users can also chat with it in Slack to ask questions such as “What performance problems are there?” or “What is the user activity?” These are just a couple of examples of the possibilities.
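As a purely illustrative sketch of that question-and-answer pattern (not Dynatrace’s actual Davis integration), the code below maps one recognizable question to a monitoring lookup; `fetch_open_problems()` is a hypothetical stand-in for whatever monitoring API a real assistant would call.

```python
from typing import Dict, List


def fetch_open_problems() -> List[Dict[str, str]]:
    """Stand-in for a monitoring API call; returns canned sample data."""
    return [
        {"service": "checkout", "impact": "response time degraded"},
        {"service": "search", "impact": "error rate elevated"},
    ]


def answer(question: str) -> str:
    """Route one recognizable question shape to a monitoring lookup."""
    if "performance problems" in question.lower():
        problems = fetch_open_problems()
        if not problems:
            return "No open performance problems right now."
        return "; ".join(f"{p['service']}: {p['impact']}" for p in problems)
    return "Sorry, I can't answer that one yet."


print(answer("What performance problems are there?"))
```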
What about compute resources and virtual machines? Tintri is using Slack and Alexa to bring automation into the data center, with visibility into the cloud infrastructure. Using Slack or Alexa, a systems engineer can ask Tintri’s Tintribot to provision a given number of virtual machines, take snapshots, and perform other lifecycle and day-two operations for a server, from cradle to grave.
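Here is a similarly hypothetical sketch of the “provision some number of virtual machines from chat” flow. The command grammar and the `provision_vm()` helper are invented for illustration and are not Tintri’s actual Slack or Alexa integration.

```python
import re


def provision_vm(name: str) -> str:
    """Stand-in for a call to a real provisioning API."""
    return f"{name}: provisioning started"


def handle_command(text: str) -> str:
    """Parse 'provision <N> VMs' and fan the request out to the stub above."""
    match = re.match(r"provision (\d+) vms?", text.strip().lower())
    if not match:
        return "Unrecognized command."
    count = int(match.group(1))
    return "\n".join(provision_vm(f"vm-{i:02d}") for i in range(1, count + 1))


print(handle_command("provision 3 VMs"))
```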
There is so much potential here that we should soon see more companies adding their own version of a personal assistant to their product portfolios. The first real question that comes to my mind is, “How well are the different virtual assistant technologies going to work together?” Will they be able to interact as a team, or will we end up with yet more separate, siloed systems? If the different solutions present robust APIs for interoperation, the possibilities could be endless.