In my last article, Priorities of Uninterrupted Data Access, I discussed the IDG survey that reported a sizeable gap between the percentage of executives (50%) and the percentage of IT managers and directors (90%) who are concerned about uninterrupted access to company data. That spread has left me speculating about what might be behind the differing attitudes and concerns.

The first question that comes to mind when I think about the spread is “Are the executives, managers, and directors all speaking the same language when it comes to the technology?” I mean, there must be some kind of breakdown in communications, expectations, or both for the spread to be that wide.

Business continuity, disaster recovery, and uninterrupted data access are all derived from the same concept: providing access to corporate data no matter what kind of catastrophe may hinder that access. It was once a long, drawn-out process for most of us to ship offsite recovery media to a remote site and then start the process to restore and recover the infrastructure. It could literally take multiple days to achieve. But my, how times have changed with the introduction of virtualization and cloud computing, which facilitate much faster recoveries for all businesses, and not just the larger enterprises. Replication of data before virtualization and the cloud used to be one of the most sizable investments a company would make for recovery. Many companies just did not have the budget available to invest in secondary standby equipment and the network infrastructure needed to receive their replicated data.

Today, on the other hand, there are several options available, from native replication tools and software at the virtualization and/or storage layer, to using the public cloud as part of the plan, to any of the Disaster Recovery as a Service (DRaaS) offerings available from third-party vendors. The availability of so many options and services might be one of the reasons why executives are less concerned about being able to provide uninterrupted access to data.

That said, the reality is that although several choices and even more methods are available for business continuity, there are limitations to be accounted for in the design. One limitation is the bandwidth available in the data center for replicating data to a remote site. When available bandwidth is too limited, sacrifices need to be made.
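To make that bandwidth limitation concrete, here is a minimal back-of-the-envelope sketch. The function name, the 70% usable-capacity figure, and the example numbers are all my own illustrative assumptions, not figures from the article or any vendor; the point is simply that the link must sustain the daily change rate or the replica falls behind.

```python
# Hypothetical sketch: can the available link keep up with the daily change rate?
# All figures are illustrative assumptions, not measurements.

def replication_feasible(daily_change_gb: float, link_mbps: float,
                         link_utilization: float = 0.7) -> bool:
    """Return True if the link can move one day's changed data in under a day."""
    usable_mbps = link_mbps * link_utilization       # leave headroom for production traffic
    seconds_per_day = 24 * 60 * 60
    # Convert GB changed per day into the sustained Mbps the replica stream needs.
    required_mbps = daily_change_gb * 8 * 1000 / seconds_per_day
    return required_mbps <= usable_mbps

# Example: 500 GB of daily change over a 100 Mbps link (~46 Mbps sustained needed).
print(replication_feasible(500, 100))   # prints True
```

If the answer comes back False, that is where the sacrifices discussed below begin: longer sync delays, fewer protected servers, or a bigger pipe.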

One such sacrifice involves the replication delay for the sync. What is an acceptable delay? Is a one-minute sync delay acceptable? What about a five-minute delay? Along the same lines, which servers are the most important to recover first? Which are not as important and can be expected to be brought up in another wave of restores?
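The prioritization questions above can be sketched as a simple tiering exercise. The server names and tier assignments here are invented for illustration; the idea is just that each server gets a priority tier, and restores run in waves from the most critical tier down.

```python
# Hypothetical sketch: group servers into restore waves by business priority.
# Server names and tier numbers are invented for illustration.
from collections import defaultdict

servers = {
    "db-primary":   1,  # tier 1: restore first
    "app-server":   1,
    "web-frontend": 2,  # tier 2: next wave
    "reporting":    3,  # tier 3: can wait
}

def restore_waves(tiered: dict) -> list:
    """Return server names grouped into waves, lowest tier number first."""
    waves = defaultdict(list)
    for name, tier in tiered.items():
        waves[tier].append(name)
    return [sorted(waves[tier]) for tier in sorted(waves)]

for i, wave in enumerate(restore_waves(servers), start=1):
    print(f"Wave {i}: {', '.join(wave)}")
```

Deciding which server lands in which tier is exactly the kind of detail-level work that falls to IT managers and directors.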

Choosing a service or a method for providing uninterrupted access may seem easy, but the details will determine your success. The attention paid to those details could be one of the biggest reasons for the difference between executives’ and IT managers’ levels of concern. IT managers and directors are more concerned with all the little details that bring about success on demand. When you chose your primary application, did you take into account all the back-end servers and processes that make up that application? Did you take into account DNS when recovering your external applications? Depending on record TTLs, it could take something like twenty-four hours for DNS changes to fully propagate and for requests to resolve to the recovery site. These are just a few of the details that give IT directors and managers a level of concern that many executives do not share. After all, that duty is delegated to the IT directors and managers. Isn’t that what is expected? I believe one thing the spread in concern levels demonstrates is the confidence executives have that their managers can deliver uninterrupted data access.