In the past, I have written about the next generation of data protection, which combines analytics with broader data and holistic system protection into one easy-to-use product (or set of products). The goal is to move disaster recovery into the future, where we can restore, and test the restoration of, not just our data but also the systems required to make that data accessible, including all networking and security constructs. If a massive disaster struck, could your disaster recovery techniques restore your entire environment at the push of a button? Does your disaster recovery testing feed back into analytics to determine what needs to change to make this a reality?
Regardless of our goals for the future, today data protection comes down to what is available now, and from where. No single vendor has everything, but many vendors are on the proper path. There are several approaches to the problem. One is to buy a single solution that may eventually get there; another is to buy multiple solutions and join them together with scripting against their APIs to achieve similar results.
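
To make the second approach concrete, here is a minimal sketch of the kind of glue scripting involved, in Python. The hosts, endpoint paths, and field names are hypothetical stand-ins, since every vendor's API differs; the point is simply that one tool's job completion can trigger the next tool's copy.

    # Hypothetical glue script: chain a backup product's REST API to a
    # replication product's REST API to approximate one unified workflow.
    # All hosts, paths, and field names below are illustrative only.
    import time
    import requests

    BACKUP_API = "https://backup.example.com/api/v1"
    REPLICATION_API = "https://replicate.example.com/api/v1"

    def run_backup(vm_name):
        """Start a backup job and return its job ID."""
        resp = requests.post(f"{BACKUP_API}/jobs", json={"vm": vm_name})
        resp.raise_for_status()
        return resp.json()["job_id"]

    def wait_for_job(job_id, timeout=3600):
        """Poll the backup tool until the job succeeds or fails."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = requests.get(f"{BACKUP_API}/jobs/{job_id}").json()["status"]
            if status == "success":
                return
            if status == "failed":
                raise RuntimeError(f"backup job {job_id} failed")
            time.sleep(30)
        raise TimeoutError(f"backup job {job_id} did not finish in time")

    def replicate_offsite(vm_name):
        """Hand the finished backup to the second tool for an offsite copy."""
        resp = requests.post(f"{REPLICATION_API}/copies",
                             json={"vm": vm_name, "target": "cloud"})
        resp.raise_for_status()

    if __name__ == "__main__":
        job = run_backup("db-server-01")
        wait_for_job(job)
        replicate_offsite("db-server-01")

The scripting itself is not hard; the hard part is keeping such glue current as each vendor's API evolves, which is exactly the burden a single integrated product would remove.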

Next-Generation Backup

Quite a few vendors are working to back up an increasing amount of the environment. The requirements for doing this are as follows, with my original names for the requirements in parentheses where they apply:

  1. Back up more than just the data. We need to back up not only the data but also the systems required to make the data available, such as domain controllers, database servers, deployment servers, etc.; a minimal sketch of one way to detect such dependencies follows this list. (Automated Application Detection and Automated Dependency Detection)
  2. Back up the network. We need to back up not only the data and the system dependencies, but also the networking constructs that allow interoperation and presentation of the data.
  3. Back up security. We also need to back up the security contexts that surround not just the data and system dependencies, but also the network components. This includes firewall rules, microsegmentation rules, and load balancer rules, as well as per-datum, per-VM, and per-system policies.
  4. Test our backups. We need to test our backups regularly to ensure that in any of the varying degrees of disaster, we can recover all that we need in a timely fashion. These needs range from recovering a single file or application, through failover to a new site, to a complete restore into an entirely different environment, such as a cloud; the test-harness sketch after this list illustrates the idea. (Automated Disaster Recovery Testing)
  5. Automated feedback. We need feedback from the recovery, automated testing, backup, and dependency-determination tasks to reach the front-end decision makers, in order to improve the disaster recovery process and position, as well as the overall recovery. (Automated Detection of Issues)
  6. Cross-system recovery. We need recovery to work between hypervisors, clouds, and disparate environments. This includes problem determination as you migrate data between underlying hypervisors; the test-harness sketch after this list also shows a simple IP-address adjustment. (Automated IP Address Adjustments during Recovery, and A Big Red Button)
  7. Visibility. We need visibility not only into the data but into the success or failure of a recovery.
  8. There-and-back-again backups. We need the ability to get our data to a remote location and back to the data center or cloud (or a replacement) where the data originally lived. (Backup to and from the cloud)
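
On requirement 1, a first approximation of dependency detection can be as simple as watching which remote systems a host actually talks to. The sketch below is only that first approximation; it assumes the third-party psutil library is installed, and a real product would correlate these observations across every VM and map addresses back to named services.

    # Hypothetical first pass at automated dependency detection: count the
    # established TCP sessions on this host to see what it depends on.
    # Assumes the third-party psutil library (pip install psutil).
    from collections import Counter
    import psutil

    def observed_dependencies():
        """Count the remote endpoints this host currently talks to."""
        deps = Counter()
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
                deps[(conn.raddr.ip, conn.raddr.port)] += 1
        return deps

    if __name__ == "__main__":
        for (ip, port), count in observed_dependencies().most_common(10):
            print(f"{ip}:{port}  ({count} sessions)")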
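
Requirements 4 and 6 lend themselves to the same harness: restore into an isolated test network, adjust the guest's IP address for the recovery site, and then prove the service answers. The sketch below is a minimal outline of that loop; restore_vm is a placeholder for whatever your backup vendor's SDK actually provides, and the addresses are invented.

    # Hypothetical automated DR test: restore into an isolated network,
    # re-IP the guest for the recovery site, then verify the application
    # responds. The restore call is a stub standing in for a vendor SDK.
    import ipaddress
    import socket

    def restore_vm(backup_id, network):
        """Placeholder for the backup vendor's restore call; a real harness
        would block here until the VM is powered on in `network`."""
        print(f"(stub) restoring {backup_id} into isolated network {network}")

    def re_ip(original, prod_subnet, dr_subnet):
        """Map a production address onto the DR subnet, keeping the host
        part (requirement 6: automated IP address adjustment)."""
        host = int(ipaddress.ip_address(original)) - int(
            ipaddress.ip_network(prod_subnet).network_address)
        return str(ipaddress.ip_network(dr_subnet).network_address + host)

    def probe(ip, port, timeout=5.0):
        """Requirement 4: prove the restored service actually answers."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        restore_vm("backup-db-server-01", network="dr-test-isolated")
        dr_ip = re_ip("10.1.0.25", "10.1.0.0/24", "192.168.50.0/24")
        result = "PASS" if probe(dr_ip, 5432) else "FAIL"
        print(f"DR test for db-server-01: {result}")  # feeds requirement 5

A pass/fail result like this is exactly the feedback that requirement 5 asks us to route back to the decision makers.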

If we could put all of these features together, the next generation of disaster recovery would appear. Our data, systems, networks, and security would be available anywhere, at any time, as needed, without our having to worry about anything more than pushing the big red button. Human intervention would be minimized, and with it, human error.

Next-Generation Tools

A number of companies have features that touch upon various aspects of the above requirements. Here is a short list of companies that concentrate on data availability using next-generation requirements:

  • HotLink covers backup of the network and security context via mapping to Amazon security groups, testing backups, cross-system recovery, and there-and-back-again backups, as well as offering visibility into the backup results between vSphere and Amazon. HotLink DR Express is currently limited to just vSphere and Amazon.
  • Veeam covers data availability through visibility into backups and results, automated testing, and there-and-back-again backups. In addition, while it does not have cross-system recovery, it can back up multiple types of systems.
  • DataGravity is unique in that it has massive visibility into the data, making it easy to search. This visibility can lead to improved dependency checking, cross-system recovery, and there-and-back-again backups. However, to do all of these currently, you need to use third-party backup tools while relying on DataGravity to provide visibility, including visibility into the security context surrounding the data.
  • Unitrends provides cross-system recovery and testing using ReliableDR for well-known applications, though application determination is done by hand. Unitrends DRaaS (Disaster Recovery as a Service) in effect creates and maintains the run-book for each virtual machine and application as part of this service. Unitrends also maintains the networking for any VMware vApp it backs up.
  • Vision Solutions provides cross-system recovery via Double-Take MOVE.
  • CommVault provides cross-system recovery of just the data.
  • Datto provides cross-system recovery; its run-books are created by Datto employees, who perform the restore in the cloud.
  • Continuity Software provides analytics about data availability with scriptable means by which to feed that data back into various tools.

All of these tools concentrate on providing availability, along with a start at understanding dependencies (workload, network, security) and the data in use. With greater visibility, it becomes possible to add analytics and meet all the requirements of next-generation data protection.