Backup, disaster recovery, and business continuity have changed quite a bit over the years, and they will continue to change as more capability, analytics, and functionality are added to the general family of data protection tools. As we launch ourselves into the clouds, we perhaps need to rethink how we do data protection, what tools are available for it, and how to use our older tools to accomplish the same goals. We need an integrated data protection plan that accounts not only for cloud or data center failures but also for the need to run within the cloud. There is always the need to get your data there and back again.
Solving this problem has been an interesting quest. I have seen parts of it solved by many vendors, but few cover the entire picture. Here is the entire picture of a possible backup scenario:

Backup: Past, Present, and Future
This diagram shows the past, present, and future of data protection: the backup of the past (the solid lines), the backup and replication of the present (the long dotted lines, plus the data store to backup store circles), and the business continuity of the future (the short dashed lines from backup store to cloud). Not just one but all of these methods may need to be employed within a hybrid cloud. Why?

Let’s look at each method in some detail:
The Past, or Traditional In-Guest Backup: Traditional backup is still used today for physical systems and for guests on hypervisors outside your control, or wherever you want finer-grained backup than a copy of the full disk. Tools from Symantec (Backup Exec, NetBackup), EMC (Avamar), CommVault (Simpana), HP (Data Protector), Dell (AppAssure), Unitrends, Datto, and others work with this method. They may or may not also implement the other methods, but all of these tools have been around for many years. This is what I call the traditional data protection approach: the backup is made through some backup proxy that drops the data onto tape or a backup store (HP StoreOnce, EMC Data Domain, etc.) for final storage or online archival.
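To make the mechanics concrete, here is a minimal sketch of what such an in-guest job boils down to. It merely stands in for the agent a commercial product would install; the backup store hostname and paths are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of a traditional in-guest, file-level backup job.

Illustrative only: this stands in for the agent a commercial backup
product would run inside the guest. Hostnames and paths are hypothetical.
"""
import datetime
import subprocess

BACKUP_STORE = "backupstore.example.com"  # hypothetical dedupe appliance or media server
SOURCE_PATHS = ["/etc", "/var/lib/app"]   # data chosen at file-level granularity

def run_backup() -> None:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = f"/tmp/guest-backup-{stamp}.tar.gz"
    # 1. Create a file-level archive inside the guest.
    subprocess.run(["tar", "czf", archive] + SOURCE_PATHS, check=True)
    # 2. Ship it to the backup store, the role a backup proxy or
    #    media server plays in a real product.
    subprocess.run(["scp", archive, f"{BACKUP_STORE}:/backups/"], check=True)

if __name__ == "__main__":
    run_backup()
```

A real agent adds scheduling, cataloging, deduplication, and application quiescing, but the data path is essentially this.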
The Present, or Virtualization Backup: Virtualization backup understands the hypervisor, and as such, its tools interact with the hypervisor to gain access to the virtual machine disk files, or even to the data stores on which they reside. Using several techniques, these tools communicate directly with the virtual disk layers within the hypervisor. They can reduce the overall bytes copied, because the hypervisor is authoritative about what has been written within virtual disks. Tools from Veeam, Unitrends, Symantec, EMC, VMware, Altaro, Quantum, and others fit this space for various hypervisors. However, if hypervisor support is missing, as with KVM or Xen, you are forced to fall back to the traditional approach.
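The payoff of hypervisor awareness is that only the blocks the hypervisor saw change need to be copied. Below is a rough sketch of that idea; the changed_extents_since call is a hypothetical stand-in for a real API such as vSphere's changed block tracking.

```python
"""Sketch of hypervisor-assisted incremental backup (changed block tracking).

The hypervisor is authoritative about writes to a virtual disk, so it can
hand back just the extents changed since the last backup. The API call here
is a hypothetical stand-in; real tools use, for example, vSphere CBT via
the vStorage APIs for Data Protection.
"""
import os
from dataclasses import dataclass

@dataclass
class Extent:
    offset: int  # byte offset into the virtual disk
    length: int  # number of changed bytes

def changed_extents_since(disk_path: str, change_id: str) -> list[Extent]:
    # Stand-in for the hypervisor API: return extents written since change_id.
    return [Extent(0, 4096), Extent(1_048_576, 8192)]

def incremental_backup(disk_path: str, change_id: str, target_path: str) -> None:
    """Copy only the changed extents into the backup target file."""
    if not os.path.exists(target_path):
        open(target_path, "wb").close()  # create an empty target on first run
    with open(disk_path, "rb") as src, open(target_path, "r+b") as dst:
        for ext in changed_extents_since(disk_path, change_id):
            src.seek(ext.offset)
            dst.seek(ext.offset)
            dst.write(src.read(ext.length))
```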
The Present, or Virtualization Replication: Virtualization replication is not quite backup; it is used more as a business continuity device. Data is replicated to another location, such as a cloud, using hypervisor-aware replication methods. These plug into the hypervisor within the virtual disk transport layers to copy data as it flows from the VM to the virtual disk, or even from the virtual disk to the VM. Such tools replicate either from site to site or from site to cloud. Some backup tools also include replication, but there it is performed by the backup proxy, as in Veeam, HotLink, Datto, and other products. True hypervisor-aware replication is provided by Zerto (for VMware vSphere) and by VMware's Site Recovery Manager (SRM) replication.
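Conceptually, this kind of replication is a write splitter: each guest write completes locally, and a copy is queued for the remote site. A toy sketch of the idea follows, with hypothetical names; real products add journaling, write ordering, and consistency groups.

```python
"""Toy sketch of a hypervisor-level write splitter, the core idea behind
hypervisor-aware replication. Names are hypothetical; products such as
Zerto implement this with journaling and consistency guarantees."""
import queue
import threading

class WriteSplitter:
    """Apply each guest write locally, then mirror it to a replica site."""

    def __init__(self, local_disk_path: str, send_remote):
        self.local = open(local_disk_path, "r+b")  # the VM's virtual disk
        self.send_remote = send_remote             # callable shipping writes off-site
        self.outbox: queue.Queue = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, offset: int, data: bytes) -> None:
        self.local.seek(offset)
        self.local.write(data)           # the guest's write completes locally...
        self.outbox.put((offset, data))  # ...and is queued for the replica

    def _drain(self) -> None:
        while True:
            offset, data = self.outbox.get()
            self.send_remote(offset, data)  # near-continuous, asynchronous copy
```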
The present also includes replication from data store to data store (storage snapshots), as well as replication from backup stores to other backup stores within the same site, at different sites, or in clouds. This form of replication is generally considered hardware-based replication; however, when you replicate to a cloud, the cloud is usually running a software version of the hardware device. How the replication takes place is generally unimportant; what matters is that the data is placed within a cloud for easy retrieval as necessary, including easily standing up virtual machines to run within the target clouds. Zerto, HotLink, Datto, and others do this, calling it either a replication receiver cloud or Disaster Recovery as a Service (DRaaS).
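The shape of this store-to-store replication is snapshot-and-ship. The sketch below uses ZFS commands purely as a stand-in for whatever mechanism the array, backup store, or cloud-side software appliance actually implements; the dataset and host names are hypothetical.

```python
"""Sketch of snapshot-delta replication between stores. ZFS is only a
stand-in here for the array's or backup store's own mechanism; the
dataset and host names are hypothetical."""
import subprocess

def replicate_snapshot(dataset: str, prev: str, new: str, target_host: str) -> None:
    # Take the new snapshot on the source store.
    subprocess.run(["zfs", "snapshot", f"{dataset}@{new}"], check=True)
    # Ship only the delta between snapshots to the remote (or cloud) store.
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{dataset}@{prev}", f"{dataset}@{new}"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["ssh", target_host, "zfs", "receive", "-F", dataset],
        stdin=send.stdout,
        check=True,
    )
    send.wait()

# Usage (hypothetical names):
# replicate_snapshot("tank/backups", "hourly-01", "hourly-02", "dr.example.com")
```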
Unfortunately, this is not everything. There are many issues when you stand up systems within a cloud from a replicated source, unless the network is also replicated into the cloud. This is a crucial component, automated by only a few tools, such as HotLink. Without the network, no devices are reachable; you end up managing DNS, IP addresses, tunnels, etc. by hand to get access to your cloud-based VMs. If this is a by-hand process (as it is with the majority of tools), a massive amount of testing and automation must be performed by the backup, virtualization, and network administrators, which implies that a restoration will not be quick. It could instead be handled by a managed cloud (such as Datto's) that manages the networking and the running of replicated VMs. Further, certain clouds require you to pre-create these networks (VMware vCloud Hybrid Service [vCHS], for example) so that when you do replicate data, everything comes up as expected. This can add quite a bit of up-front work to the entire business continuity process. In either case, documentation could be limited.
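Even a minimal failover runbook therefore needs a network mapping, built and tested before the disaster. Here is a hypothetical sketch of such a table and the DNS changes it implies; all addresses and names are invented.

```python
"""Hypothetical sketch of the network-mapping step most tools leave to
hand work. Addresses and names are invented; a real runbook would push
these changes through the DNS provider's API and test them regularly."""

# Production address -> (cloud address, DNS name) for each replicated VM.
FAILOVER_MAP = {
    "10.0.1.10": ("172.16.1.10", "app01.example.com"),
    "10.0.1.20": ("172.16.1.20", "db01.example.com"),
}

def plan_dns_changes() -> list[str]:
    """List the record updates a cloud failover would require."""
    return [
        f"UPDATE {name} A {cloud_ip}  ; was {prod_ip}"
        for prod_ip, (cloud_ip, name) in FAILOVER_MAP.items()
    ]

if __name__ == "__main__":
    print("\n".join(plan_dns_changes()))
```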
The Future, or There and Back Again: The real goal of business continuity is to use the resources available to create a robust backup architecture that uses whatever is necessary (in-guest or hypervisor-based) not only to place your data into a hot site, cloud, or some other location, but also to allow the data to be replicated back to an existing or new location once the disaster no longer hinders the business. Such a disaster could be the threat of a hurricane: we enable replication with enough time to get our data to the other location, run there, and then pull the data back when the hurricane has passed. The goal is to use the appropriate means to move data around your hybrid cloud and always be able to bring it back as necessary. Such tools must provide reversible replication (Datto, VMware SRM replication, HotLink) back to the data stores within the recovery site after the disaster is repaired. Why would you do this? Mainly because, currently, not everyone runs 100% within the cloud, and not everyone wants to; but for an emergency or a planned outage (say, a data center move), running in the cloud or a hot site is warranted.
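Stripped to its essence, reversible replication is a pairing whose direction can be flipped twice: forward to the cloud ahead of the disaster, and back again afterward. A toy sketch with hypothetical names follows; Datto, SRM, and HotLink each implement their own far richer version.

```python
"""Toy sketch of reversible ("there and back again") replication.
Names are hypothetical; this only captures the direction-flipping idea."""
from dataclasses import dataclass

@dataclass
class ReplicaPair:
    source: str  # the currently authoritative copy
    target: str  # the copy currently receiving changes

    def reverse(self) -> None:
        """Flip replication direction; the old target becomes authoritative."""
        self.source, self.target = self.target, self.source

pair = ReplicaPair(source="datacenter-A", target="cloud-DR")
pair.reverse()  # hurricane inbound: fail over and run in the cloud
# ...VMs run in the cloud; changes accumulate on the cloud side...
pair.reverse()  # storm passed: replicate the changes home and fail back
```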

Concluding Thoughts

Backup is not really where it is at these days; it is just a tool to get data from one location to another. The tools used for backup can also perform replication and reverse replication, not only to get your data to a new location (cloud, hot site, etc.) but to get your data back again. Why is this required? Because ultimately the data owner is responsible for the data, not anyone else; there is no way to offload that ultimate responsibility. However, you do not need to rip and replace your tools: just use them as necessary. Inside a cloud, for example, traditional backup approaches are often warranted, and an on-site backup store may be required as well. The real question in my mind is, “How do you get your data to a new location, such as a cloud, and how do you get your data back again? What works for you?”