The rise of virtual desktop infrastructure (VDI), server-based computing (SBC), and now cloud computing has given new importance to monitoring your environment. The user experience now reigns, and avoiding any negative impact on the performance of the user’s session is one of the key goals behind any enterprise IT solution. We’ve been forced to adapt, moving away from an infrastructure-focused monitoring standpoint that concentrated on back-end systems like networks, storage, and databases, and toward a more holistic approach that measures real-time performance on the user’s endpoint device. Users’ homes are now laden with devices, such as iPads, that provide a slick, smooth, and consistent experience they can compare to what they get in their daily jobs. It is small wonder that monitoring the client endpoint itself has become a priority for many.
The Vendor Pack
Naturally, a plethora of vendors are springing up to fill the gap. Off the top of my head I can name a whole host: Lakeside Software, eG Innovations, Aternity, ExtraHop, Nexthink, ControlUp, and many more. Even Microsoft’s infrastructure-focused behemoth, System Center, can provide endpoint monitoring if configured correctly.
But this raises questions. First, a business has to buy into these solutions. Building the business case is crucial, and it often isn’t sufficient to simply show the benefits for a particular project. The monitoring solution of choice has to apply to the business as a whole, be usable by the entire IT department, and demonstrate a proper return on investment (ROI). Some of the solutions mentioned above are expensive, and to justify that sort of outlay there needs to be a compelling case that delivers value across every aspect of the enterprise, especially if you’re buying a solution with ongoing support costs.
Second, any solution has to be configured to monitor the appropriate KPIs and then maintained and tuned as time goes on. For overstretched IT departments, this is often a bridge too far. Setting up the software to monitor the required metrics can be a large task in itself; anyone who has ever installed Lakeside SysTrack, taken a first look in the console, and not felt daunted is either an expert in it or a liar.
You also need to configure baselines: reference values for each metric, so that you can assess whether the performance of your infrastructure is declining or not. Where do those baselines come from? By raising a finger in the air and taking a guess? And which metrics matter most to your user base?
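To make that a bit more concrete, here is a rough sketch (in Python, with entirely made-up sample data, not tied to any particular vendor’s tooling) of how a baseline could be derived from observed history rather than guesswork: gather a metric such as logon time over a representative period, then treat anything well outside the observed range as a deviation worth investigating.

```python
import statistics

def build_baseline(samples, tolerance=2.0):
    """Derive a simple baseline (mean plus N standard deviations)
    from historical samples of a metric, e.g. logon time in seconds."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return {"mean": mean, "upper": mean + tolerance * stdev}

def is_degraded(value, baseline):
    """Flag a new observation that falls outside the derived baseline."""
    return value > baseline["upper"]

# Hypothetical logon times (seconds) gathered from endpoints over a week.
history = [22.1, 24.3, 23.8, 21.9, 25.0, 23.4, 22.7, 24.9, 23.1, 22.5]
baseline = build_baseline(history)

print(baseline)                     # roughly {'mean': 23.4, 'upper': 25.6}
print(is_degraded(31.2, baseline))  # True: worth an alert
```

Crude as it is, it illustrates the point: the baseline comes from your own data, not from a finger in the air.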
And then, of course, you need to tune the system, tweak it, maintain it, and adapt it for new software or hardware. Do you have the skills to do this, or will you end up paying for expensive external resources, either as contractors or from a consultancy? What about upgrades? What if you need to add new dashboards to monitor new software solutions? Implementing a monitoring solution can quickly become so expensive in terms of capital outlay that it risks wiping out the projected ROI.
All of the solutions I’ve seen over the last few years suffer from the same problem: there’s no real killer feature differentiating one from the rest. All of them do a great job, and some have stand-out features. Nexthink gives you fantastic visualizations, Lakeside can monitor pretty much anything that uses electrons, and Aternity allows you to record custom metrics from within a user session. Each one has its pros and cons. But none of them really addresses the problems I mentioned above.
It’s worth noting at this stage that cloud workloads, although they are expanding, are still really only suited to particular areas. We’ve had no problem moving email, IM, and other collaboration features into the likes of Azure and AWS. Now, monitoring—that’s probably another strong contender to push into the cloud, no?
Predictive Insights and Analytics
What I’ve really liked the look of is a custom monitoring service provided by an Australian company called Insentra. It’s built on existing technology, but expanded and customized into a managed service that it calls Predictive Insights and Analytics (PIA). The idea is that you install an agent onto the systems you want to monitor, specify the KPIs you want to see in the dashboards and reports, point the agents at the cloud service (although you can have it on-premises if required), and you’re done. That’s it: no learning curve, no huge design and implementation phase, no requirement to manage or tune the system. You just view the alerts and react to them as necessary.
What makes it even better is that because it’s a managed service, you could adopt it simply for the lifetime of a project and then stop paying for it. This lets you avoid the problems of building a business case by sidestepping the huge costs associated with the aforementioned monitoring solutions: you just pay for what you need. This is another feature that makes PIA stand out from the crowd.
The “predictive” nature of the PIA service also means you can get alerts based around performance degradation rather than actual failure. If one of your KPIs—for example, logon time—suddenly starts degrading, you can receive the alert before it becomes noticeable to the users and take whatever proactive action is necessary to correct the situation. Dashboards don’t need to be specifically configured in-house: you just provide Insentra with the KPIs you wish to monitor—as few or as many as necessary—and they are then built for you and made visible when the service commences.
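To illustrate what “predictive” means in practice, here is a minimal sketch of the kind of logic involved; plain Python with hypothetical numbers, and certainly not Insentra’s actual implementation. It fits a simple trend line through recent logon-time samples and flags an alert when the metric is drifting upward, even though no individual sample has yet breached a hard limit.

```python
def trend_slope(samples):
    """Least-squares slope of a metric over equally spaced samples
    (seconds of logon time per sample interval, in this example)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def degrading(samples, slope_threshold=0.5):
    """Alert when the metric is trending upward faster than the
    threshold, even if no single sample has hit a hard limit yet."""
    return trend_slope(samples) > slope_threshold

# Hypothetical logon times (seconds): nothing has "failed", but the trend is up.
recent = [23.0, 23.8, 24.5, 25.9, 27.1, 28.4]
print(round(trend_slope(recent), 2))  # roughly 1.1 seconds per interval
print(degrading(recent))              # True: raise a proactive alert
```

The value is in catching the slope before the ceiling: by the time a hard threshold fires, the users have usually already noticed.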
I think that the Insentra PIA service, at the moment, has the “killer” features that set it apart from the other monitoring solutions often pitched at enterprises looking to enhance their user experience. It doesn’t need huge capital outlay, it doesn’t need lots of specialist skills, and it doesn’t require huge amounts of time to implement. You don’t have to build a wider business case, and when you’re finished with it, you can just stop paying for it. Right now, that’s a whole host of advantages that make it rise above the rest. I like all of the other products I mentioned—they all have great features—but when it comes to trying to get an enterprise to take the need for monitoring seriously, a solution you can plug in and turn on with a minimum of resources and expenditure really stands out.
Your point seems to be that monitoring from the cloud is the way to go. That’s fine – but I really don’t see why Insentra’s service has the killer features. Most monitoring vendors offer cloud-based access. ControlUp, LogicMonitor, eG Innovations, New Relic all offer cloud options. Many vendors even offer pay-per-use options for on-premises deployments if you want. Very strange that you have picked one service provider and branded them as providing killer functionality without much detail!
In all my time working with the likes of Lakeside and other vendors, they’ve always pushed very much towards on-premises deployment. I can only assume this is because the maximum revenue is (or has been) made this way. A variety of other providers have also repackaged monitoring tech and offered it as a service, but this has always been more costly. This is the first service I’ve come across that seems to be more cost-effective as well as removing the need for resources, time, and deployment overhead. If other vendors are adapting their models now, then that is a good response, but it seems to me that Insentra were the first to go with a “cloud-first” model rather than concentrating on in-house deployment. YMMV, but this has been my experience over the last few years.