How do you measure your data-protection success? This is a question that has plagued many folks. Data-protection success could be measured by cost savings, peace of mind, recovery success, or the number of support tickets opened to achieve true data protection. Most likely it is a combination of all those items.
Here is one definition of success that I found interesting: not only is it about the speed with which systems can be brought online, but the absence of support calls over the course of a year is a telling sign as well. Now, I begin to wonder whether this refers to a “set and forget” situation. However, given how often systems are moved between clouds these days by those with the technology available to do so, that can’t be the case.

This video talks about saving time, money, and frustration, and it offers a measure for success. Success is a big part of the next generation of data protection, whose goals are to save time, money, and frustration while offering as many recovery options as necessary to protect your data. What is required for a next-generation data protection product to be successful now and in the future?

Next-Gen Data Protection Success Requirements

No matter how you measure success, and I do hope you have such a measurement, the next generation of data protection has the following requirements. These requirements focus on the recovery of applications, not just data. If I restore only data but it takes days to set up an important application, then that recovery would not be considered successful. The losses in terms of revenue and time would be very high.

There and back again

Our data must go there (perhaps to a cloud) and back again (to where the data once was). It may travel to different clouds on its journey, but eventually it needs to get back to us. Most tools will let you reverse replication or backup targets so that the data lands back at the originating site once data protection has been processed within a hot-site cloud or data center. Veeam, CloudVelox, Zerto, and others will do this.
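As a minimal illustration of that round trip (not any vendor’s actual API; the job model and names here are hypothetical), the sketch below represents a replication job as a source/target pair and reverses it so the protected copy can make the return journey:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ReplicationJob:
    """A hypothetical replication job: data flows from source to target."""
    source: str   # e.g., "datacenter-primary"
    target: str   # e.g., "cloud-hotsite"

def reverse_replication(job: ReplicationJob) -> ReplicationJob:
    """Swap source and target so the data can make the trip back again."""
    return replace(job, source=job.target, target=job.source)

if __name__ == "__main__":
    outbound = ReplicationJob(source="datacenter-primary", target="cloud-hotsite")
    inbound = reverse_replication(outbound)
    print(inbound)  # ReplicationJob(source='cloud-hotsite', target='datacenter-primary')
```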

Focus on recovery

Test your recovery often and with an automated method. The goal is to ensure that a recovery will be successful no matter where you place the backup (cloud, disk, tape, or elsewhere). Veeam started this movement to test recoveries within a dedicated test environment with its SureBackup technology. Most others will give a checksum or even an image of the booted VM so that you can verify whether it is correct. We need to improve the automation of recovery testing for the next generation of data protection.
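A basic building block of automated recovery testing is verifying that what came back matches what was stored. The sketch below, with hypothetical paths and a placeholder checksum, streams a restored image and compares its SHA-256 digest against the value recorded at backup time; a real test harness would go further and boot the workload in an isolated environment:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large backup images never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restored: Path, expected_sha256: str) -> bool:
    """True only if the restored image matches the checksum recorded at backup time."""
    return sha256_of(restored) == expected_sha256

if __name__ == "__main__":
    # Hypothetical path and checksum; a real job would pull these from the backup catalog.
    image = Path("/restore-test/app-server.img")
    recorded = "0000000000000000000000000000000000000000000000000000000000000000"
    ok = image.exists() and verify_restore(image, recorded)
    print("recovery test passed" if ok else "recovery test failed")
```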

Must determine dependencies

All next-generation backup solutions must be able to pick out an application from a set of physical, virtual, and cloud-based systems and use that information to determine what should be backed up and where. Not only do we need to determine dependencies on the front end, but the tools need to start analyzing the recovery or recovery test to determine whether proper dependencies have been met to restore the entire application. In other words, we have the chance up front to capture the current state of an application via communication between components, and we have a chance on recovery to use the same techniques to capture the state of an application during the full boot cycle. Then, by running analytics, we can determine whether all dependencies exist. There are no products that do both sides of this analysis today. These dependencies should be picked up by other tools as well: perhaps those that output TOSCA graphs or ingest them to produce interconnected dependency maps.
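To make the idea concrete, here is a minimal, hand-rolled dependency check in Python. The component names and the graph are hypothetical stand-ins for what discovery or a TOSCA blueprint would provide; the check simply reports any dependency of a restored component that was not itself restored:

```python
# Hypothetical application graph: each component maps to the components it depends on.
APP_DEPENDENCIES = {
    "web-frontend": {"app-server"},
    "app-server": {"database", "message-queue"},
    "database": set(),
    "message-queue": set(),
}

def missing_dependencies(restored: set, graph: dict) -> dict:
    """For every restored component, report any dependency that was not restored."""
    return {
        component: graph.get(component, set()) - restored
        for component in restored
        if graph.get(component, set()) - restored
    }

if __name__ == "__main__":
    restored_components = {"web-frontend", "app-server", "database"}
    gaps = missing_dependencies(restored_components, APP_DEPENDENCIES)
    if gaps:
        print("recovery incomplete:", gaps)  # app-server is missing message-queue
    else:
        print("all application dependencies satisfied")
```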

Business continuity, not only disaster recovery at the cost of backup

Ensure recovery can happen within a short window. We want to ensure that business can continue even when power or cooling is out in our data center or even in the cloud we use. Backup recovery windows are shrinking, and we need to use improved mechanisms to ensure our data is readily available everywhere it needs to be for a speedy recovery of our business. Veeam, Vizioncore (now Dell), and PHD Virtual (now Unitrends) spearheaded the reduction in backup times by employing change block tracking, active block tracking, deduplication, and the like; these are now the norm for all modern backup tools. Tools that replicate to the cloud, like CloudVelox, Cloudtools, and HotLink, are good examples that concentrate on ensuring that the workload can launch in a hot cloud. Other tools, such as Veeam, Zerto, and Quantum, require a data-protection endpoint to live within the cloud.
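Change block tracking itself happens at the hypervisor layer, but the underlying idea is simple: checksum fixed-size blocks and copy only those whose checksums changed since the last pass. The toy sketch below (illustrative block size, in-memory data, not any vendor’s implementation) shows that comparison:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real CBT works at the hypervisor layer

def block_digests(data: bytes) -> list:
    """Checksum each fixed-size block of the image."""
    return [
        hashlib.sha256(data[offset:offset + BLOCK_SIZE]).hexdigest()
        for offset in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(previous: list, current: list) -> list:
    """Indices of blocks whose checksums differ since the last backup pass,
    plus any blocks appended since then."""
    changed = [i for i, (old, new) in enumerate(zip(previous, current)) if old != new]
    return changed + list(range(len(previous), len(current)))

if __name__ == "__main__":
    first_pass = b"A" * BLOCK_SIZE * 3
    second_pass = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"A" * BLOCK_SIZE
    print(changed_blocks(block_digests(first_pass), block_digests(second_pass)))  # [1]
```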

Business continuity for all workloads, not just mission-critical ones

Businesses may believe that only certain specified applications are mission-critical, but employees tend to think otherwise. If someone cannot do their job, then they are adversely impacted. The definition of “mission-critical” changes from job to job. We should by all means start with mission-critical backups and data protection, but eventually the entire business environment should be considered as a candidate for business continuity at an affordable cost. Replication tools from HotLink, CloudVelox, Veeam, Zerto, Quantum, and others provide appropriate solutions, but the cost in data transfer, storage, and cloud (or hot site) is still a bit high for everything.

Make use of the near-infinite ready spare capacity within an elastic cloud

Use of the cloud as one’s hot site is a growing trend in enterprises and small businesses today. The goal is to provide elasticity for recovery as needed. The cloud has capacity that our data centers may not. A cloud could be private or public, but in either case, it tends to have more spare capacity. A recovery plan should be able to target multiple clouds so that recovery can happen as quickly as possible. CloudVelox and HotLink do this today. Veeam, Zerto, and others are options if the cloud hosts a replication endpoint or if you can place one within the cloud.
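One small piece of targeting multiple clouds is deciding which recovery endpoint to use when disaster strikes. The sketch below, using hypothetical endpoints, probes a prioritized list of replication targets and picks the first one that answers:

```python
import socket

# Hypothetical recovery targets; in practice these would be replication endpoints
# already seeded with your backups, listed in order of preference.
RECOVERY_TARGETS = [
    ("private-cloud.example.com", 443),
    ("public-cloud-a.example.com", 443),
    ("public-cloud-b.example.com", 443),
]

def first_reachable(targets, timeout: float = 3.0):
    """Return the first target that answers, or None if every cloud is unreachable."""
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue
    return None

if __name__ == "__main__":
    target = first_reachable(RECOVERY_TARGETS)
    print(f"recover into {target[0]}" if target else "no recovery target reachable")
```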

No special web interfaces or server to manage

Data protection needs to be managed within the tools people use daily. Data protection is no longer “set and forget” but is an integral part of any deployment. Too many times, we see data protection using an entirely different interface that does not integrate into the tools we use on a day-to-day basis. Until management is integrated or enough information is surfaced as alerts, it is not possible to know the state of your data protection within, say, vCenter, System Center, or your NOC. HotLink integrates directly into vCenter, while Veeam embeds messages as notes on each VM. The goal is to know via alerts that there is a problem or that everything is doing well. If I have to dig for it, data protection becomes “set and forget.”
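As a sketch of the alert-driven approach (the webhook URL and payload format are hypothetical; a real integration would pull job results from the backup tool’s own API and feed vCenter, System Center, or the NOC directly), the snippet below stays silent when a job succeeds and raises an alert only when something is wrong:

```python
import json
import urllib.request

NOC_WEBHOOK = "https://noc.example.com/alerts"  # hypothetical monitoring endpoint

def alert_if_unhealthy(job_name: str, status: str) -> None:
    """Surface data-protection state in the tools operators already watch.
    Only raise an alert when something is wrong; silence means healthy."""
    if status == "Success":
        return
    payload = json.dumps(
        {"source": "data-protection", "job": job_name, "status": status}
    ).encode()
    request = urllib.request.Request(
        NOC_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()

if __name__ == "__main__":
    # A real integration would pull job results from the backup tool's API or event log.
    alert_if_unhealthy("nightly-app-backup", "Warning: 2 VMs skipped")
```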

Works with my current hardware

There should be no requirement to buy more hardware to make data protection a success. Appliances must work everywhere: in the data center, cloud, hot site, and elsewhere. This does not mean you shouldn’t invest in hardware backup appliances. However, when you do, you need to ensure that they meet your next-gen data protection needs. Invest as needed, but do not think hardware is the only solution. Companies that sell hardware appliances, such as Quantum and Datto, also sell software versions of them, while Veeam, HotLink, CloudVelox, and others have always sold only software. Veeam also integrates with more and more existing hardware.

Works with a cloud running a different hypervisor

Hypervisor-agnostic data protection is a modern-day requirement. There should be no need for like-to-like backup or restoration. In reality, the cloud and the hypervisor should not matter. This reinforces the need for cloud-to-cloud backups and replication. Do not always trust one cloud. The real issue, however, is that many clouds use different hypervisors, which implies different drivers and virtual disk formats. Being hypervisor-agnostic implies that this should not matter; today, it does. Tools that translate your VMs for you, like CloudVelox and HotLink, allow for cloud-to-cloud data protection without your needing to worry about the hypervisor.
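Part of what such translation involves is converting virtual disk formats between hypervisors. The sketch below shells out to qemu-img, assuming it is installed, to convert a hypothetical VMDK into qcow2 for a KVM-based cloud; a real migration also has to inject the drivers the destination hypervisor expects:

```python
import subprocess

def convert_disk(source: str, source_format: str, dest: str, dest_format: str) -> None:
    """Convert a virtual disk between formats with qemu-img (must be installed).
    Format translation is only part of the job: a real migration also injects
    the drivers the destination hypervisor expects."""
    subprocess.run(
        ["qemu-img", "convert", "-f", source_format, "-O", dest_format, source, dest],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical paths: a vSphere VMDK converted for a KVM-based cloud.
    convert_disk("app-server.vmdk", "vmdk", "app-server.qcow2", "qcow2")
```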

Does not require the cloud target to “hook” into the cloud at the hypervisor level

Any data-protection tool should speak well-known APIs like S3, EBS, vSphere SDK, Hyper-V SDK, and more. In addition, if those same tools are replicating to a cloud, they should manage a mirror, copy, and backup of the data either to another cloud or back to the originator, if the originator’s and the cloud’s copies are not the same. This is part of the “there and back again” data-protection mentality. However, there should be no requirement within the cloud to hook into any aspect of that cloud’s underlying hypervisor in order to work. This is where the new brand of data-protection tools, such as CloudVelox and HotLink DR, excels, while tools that rely on hypervisor-specific underpinnings, such as the VMware vSCSI API, fail. The underlying hypervisor should not matter in the next generation of data protection.
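As one example of speaking a well-known API, the sketch below uses boto3 against S3 (credentials assumed to be configured; bucket and key names are hypothetical) to check whether the originator’s copy and the cloud replica still match by comparing their ETags. Multipart-upload ETags are not plain MD5s, so a production check would fall back to a full checksum when needed:

```python
import boto3  # requires AWS credentials configured in the environment

s3 = boto3.client("s3")

def copies_match(source_bucket: str, dest_bucket: str, key: str) -> bool:
    """Compare the ETags of the originator's copy and the cloud replica.
    (ETags from multipart uploads are not plain MD5s; a production check
    would fall back to a full checksum when the ETags were computed differently.)"""
    source_etag = s3.head_object(Bucket=source_bucket, Key=key)["ETag"]
    dest_etag = s3.head_object(Bucket=dest_bucket, Key=key)["ETag"]
    return source_etag == dest_etag

if __name__ == "__main__":
    # Hypothetical bucket and key names.
    if not copies_match("dp-originator", "dp-cloud-mirror", "backups/app-server.img"):
        print("replica drifted from the originator; re-copy required")
```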

Encrypt/sign all traffic and storage

Data confidentiality and integrity are a must. One way to achieve this is to use encryption; however, checksums, digital signatures, and other mechanisms are also suggested. Not all data needs to be encrypted (only data that policy requires to be encrypted should be), but all data should be digitally signed to ensure its integrity. Unfortunately, most data-protection tools use only encryption, as it is easier to encrypt everything than it is to govern this via policy. Public data, for example, does not need encryption, but it does need a digital signature to ensure it has not been changed by an unknown third party.
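A minimal sketch of the “sign everything, encrypt by policy” idea follows. The keys, classifications, and policy are hypothetical; it uses HMAC-SHA256 for the signature and Fernet from the cryptography package for encryption, where a real tool would pull keys from a key-management system:

```python
import hashlib
import hmac

from cryptography.fernet import Fernet  # pip install cryptography

SIGNING_KEY = b"replace-with-a-managed-signing-key"  # hypothetical key material
ENCRYPTION_KEY = Fernet.generate_key()               # a real tool would use a KMS

def protect(data: bytes, classification: str) -> dict:
    """Sign everything; encrypt only what policy says must be encrypted."""
    record = {
        "classification": classification,
        "signature": hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest(),
    }
    if classification in {"confidential", "regulated"}:
        record["payload"] = Fernet(ENCRYPTION_KEY).encrypt(data)
    else:
        record["payload"] = data  # public data stays readable but is still signed
    return record

if __name__ == "__main__":
    print(protect(b"publicly published price list", "public")["signature"])
    print(len(protect(b"customer records", "regulated")["payload"]))
```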

Restore into the cloud or anywhere

This is a restore-anywhere necessity. I do not know where my data will end up, but it is important to be able to use it anywhere required. Tools that use clouds as pure repositories are very good at restoring to anywhere; however, the first place to restore would be into the cloud in which I set up the repository. Why? Because it is very bad form to attempt a restore and end up having to transfer terabytes of data in a very short window of time. Instead, restore quickly, but ensure your data can get anywhere as needed. This ends up proliferating your data across clouds and data centers, but it has the added value of restoring to anywhere. Because of this proliferation, you must have control over and knowledge of where your data is. Data protection requires a well-thought-out data policy: “this data can go to this cloud, but that data cannot,” or “this data must be encrypted in location A but not in location B,” and the list goes on.
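To show what such a data policy might look like in machine-readable form, here is a hypothetical placement table and a check that answers “may this classification of data be restored or replicated to that location?”:

```python
# A hypothetical placement policy: which classifications may land in which clouds,
# and whether they must be encrypted once they get there.
PLACEMENT_POLICY = {
    "public":    {"allowed": {"cloud-a", "cloud-b", "on-prem"}, "encrypt": False},
    "internal":  {"allowed": {"cloud-a", "on-prem"},            "encrypt": True},
    "regulated": {"allowed": {"on-prem"},                       "encrypt": True},
}

def placement_allowed(classification: str, destination: str) -> bool:
    """May this data be restored or replicated to that location?"""
    policy = PLACEMENT_POLICY.get(classification)
    return bool(policy) and destination in policy["allowed"]

if __name__ == "__main__":
    print(placement_allowed("regulated", "cloud-a"))  # False: policy keeps it on premises
    print(placement_allowed("public", "cloud-b"))     # True
```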

Final Thoughts

A new breed of data-protection tool is on the market, and it works inherently in the cloud world. It does so particularly with Amazon, but has its roots in the data center. Such tools include CloudVelox, HotLink, and Cloudtools, among others. However, data-protection success depends not just on a successful backup, but also on a successful recovery. It is a must that your data protection be well thought out, part of everyday activities for those controlling an environment, and application-centric; it must cross cloud boundaries and provide “there and back again” protection.
Data protection is not an all-or-nothing proposal: it is a mix of systems, clouds, environments, and data classifications that often require very different policies. We still do not know what constitutes an application, but we are getting closer, as there is a new breed of tools that can output and read in TOSCA graphs created from blueprints of applications. Infrascale, for example, can input and use a TOSCA graph for backup acceleration but does not use it as an integral part of data protection. Currently, I do not see many companies meeting the full range of requirements for next-generation data protection.
It is no longer enough simply to back up your data. Your data needs to be readily available and usable by the intended application as rapidly as possible.
How do you measure your data-protection success? What are your requirements for next-generation data protection?
