Gartner estimates that the average cost of network downtime is $5,600 per minute, which works out to $336,000 per hour. To prevent costly downtime, organizations should have a disaster recovery (DR) plan in place that lays out what to do in the event of outages or attacks.
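
To put that figure in context, here is a back-of-the-envelope calculation; it is a minimal sketch that simply assumes Gartner's per-minute average holds over longer outages:

    # Illustrative only: direct downtime cost based on Gartner's
    # widely cited average of $5,600 per minute.
    COST_PER_MINUTE = 5_600  # USD

    def downtime_cost(minutes: float) -> float:
        """Estimated direct cost of an outage of the given length."""
        return minutes * COST_PER_MINUTE

    print(downtime_cost(60))           # 336000 -- one hour of downtime
    print(downtime_cost(7 * 24 * 60))  # 56448000 -- a week-long outage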

“When business applications and their underlying data are no longer available, businesses stop functioning,” said Stanley Zaffos, a senior vice president at storage company Infinidat and former analyst at Gartner. “Stop functioning long enough, and you don’t generate income to sustain your business.”

Disaster recovery has always been important, but according to W. Curtis Preston, chief technologist at disaster recovery-as-a-service company Druva, the advent of ransomware has made it even more so. Attackers have broadened their targets from infecting personal laptops to steal personal information to attacking mission-critical servers, he explained.

Attackers can essentially take a hospital or even a whole city down by taking down their servers, Preston said. A 2018 survey from SentinelOne found that, on average, 56 percent of organizations had suffered a ransomware attack in the 12 months preceding the survey.

“Because ransomware has become so prevalent and when companies get it, they get down for weeks or whatever, DR has become much more important today than it has in any other situations,” Preston said.

In addition, in an age when major data breaches have become almost commonplace, consumers are paying closer attention to how the companies handling their information protect that data. This is evidenced by the emergence of data protection legislation such as the GDPR and the upcoming California Consumer Privacy Act.

Having a disaster recovery plan in place won't stop criminals from stealing data if they get into your servers, but if a hack takes down your systems for an extended period of time, consumers might be less likely to trust your organization to store their data. Or they may become fed up with not being able to access the services a company offers and go to a competitor.

So not only could downtime cost over $300,000 per hour, it could also carry indirect costs such as a loss of customers. According to a survey from RAND, 11 percent of respondents stopped interacting with an organization following a breach.

Traditionally, to prepare for a disaster such as a hack or an outage, organizations maintained a "hot site," a secondary location where all of their data is replicated and stored, Preston explained. In these scenarios, the secondary location has up-to-date information from the main site, and when a disaster occurs, the IT team can easily spin up the second site and have it take over, he said. If ransomware infects data on the main server, organizations can simply restore data from this secondary site and be up and running again.
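
As a rough illustration of the hot-site model Preston describes, the sketch below shows the basic failover logic; the Site class and its methods are hypothetical stand-ins, not a real DR product's API:

    # Minimal sketch of hot-site failover logic (hypothetical names).
    class Site:
        def __init__(self, name: str):
            self.name = name
            self.data: dict = {}
            self.healthy = True

        def replicate_to(self, other: "Site") -> None:
            # In a real hot site, replication runs continuously, so the
            # secondary always holds current data.
            other.data = dict(self.data)

    def active_site(primary: Site, secondary: Site) -> Site:
        # The failover decision: serve from the primary until it is
        # declared down, then promote the hot site.
        return primary if primary.healthy else secondary

    primary, hot_site = Site("main-dc"), Site("dr-site")
    primary.data["orders"] = ["#1001", "#1002"]
    primary.replicate_to(hot_site)

    primary.healthy = False                      # e.g. ransomware hits
    print(active_site(primary, hot_site).name)   # dr-site

The key design point is that replication runs continuously, so the secondary already holds current data by the time the primary is declared down.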

According to Preston, two of the most important metrics when declaring a disaster are RTO (Recovery Time Objective) and RPO (Recovery Point Objective). RTO is the amount of time it takes from the declaration of a disaster to the time service is restored. RPO is the agreed-upon amount of data that it is acceptable to lose, he said. In general, most companies' recovery objectives are met with an RTO of 15 to 20 minutes and an RPO of one hour's worth of data, Preston explained. This means that they'll be back up and running in 15 to 20 minutes and only lose the last hour of data that hadn't been backed up yet at the time of the disaster.
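
To make those two metrics concrete, here is a minimal sketch with a hypothetical incident timeline; the timestamps are invented for illustration:

    from datetime import datetime

    # Hypothetical incident timeline (invented values).
    last_backup = datetime(2019, 6, 1, 9, 0)    # last replicated point in time
    disaster    = datetime(2019, 6, 1, 9, 45)   # disaster occurs and is declared
    restored    = datetime(2019, 6, 1, 10, 0)   # service is back up

    rto_actual = restored - disaster      # declaration to restoration
    rpo_actual = disaster - last_backup   # data written since last backup is lost

    print(rto_actual)   # 0:15:00 -- meets a 15-20 minute RTO
    print(rpo_actual)   # 0:45:00 -- within a one-hour RPO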

In the cloud era, organizations can now leverage the cloud during disasters. “The beautiful thing about the cloud is that you can just snap your fingers and you have a thousand servers and you have a hundred terabytes of data, whatever it is you need,” said Preston. “There’s always excess capacity available for everybody, and you can also set it up so that when [there is] a disaster, you would instead of spinning up in a nearby business, you can actually spin up in another region so you can actually do this outside of whatever disaster happened to you.”
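
As a minimal sketch of that idea, assuming AWS EC2 via the boto3 library (any cloud would do), the snippet below spins up recovery capacity in a different region from the one the disaster hit; the region, image ID, instance type, and counts are placeholders, not recommendations:

    import boto3

    # Placeholders for illustration only.
    DR_REGION = "us-west-2"                  # deliberately far from the disaster
    RECOVERY_AMI = "ami-0123456789abcdef0"   # pre-built image of a production server

    # Recover into a different region than the one the disaster hit.
    ec2 = boto3.client("ec2", region_name=DR_REGION)
    ec2.run_instances(
        ImageId=RECOVERY_AMI,
        InstanceType="m5.large",
        MinCount=10,   # capacity exists on demand; you pay only when you use it
        MaxCount=10,
    )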

John Samuel, executive vice president of IT company CGS, noted that it’s important that organizations don’t automatically assume that the public cloud will provide disaster recovery. “This is not the case,” he said. “Companies still need proper disaster recovery and business continuity plans to lay out what would happen should there be data loss resulting from security issues or cloud provider outages.”

It should be noted that disaster recovery shouldn't be confused with business continuity planning. The two terms are often used interchangeably, but according to Samuel, they're quite different: business continuity planning is "the ability of an organization to maintain essential business functions during, as well as after, a disaster has occurred." Disaster recovery planning is a subset of business continuity planning, he explained.

It’s not enough to just draft up a plan and forget about it, though. “A plan is worthless if the team does not know how to execute it effectively – and during a disaster is not the time to try it out for the first time,” said Mike Fuhrman, chief product officer of Flexential, a colocation data center.

Zaffos recommended that organizations update and test disaster recovery plans whenever a new mission-critical application is brought online, after capacity upgrades, or after the addition of new server, networking, or storage equipment. At a minimum, plans should be tested semiannually. "Without regular testing, it is fair to argue that a D/R plan is more hope than capability," he said.

Mark Jaggers, senior director analyst at Gartner, said that “exercise” is a more appropriate term than “test.” This is because these exercises aren’t something that can be passed or failed; they are meant to build confidence and strength in the ability to execute plans.

Jaggers said that the people potentially responsible for bringing systems back up should be the ones included in the exercise. He also recommended doing these exercises without a full team. “The idea of a disaster is that you don’t know when it’s going to happen or what it’s going to affect,” he said. “You also don’t know who is going to be there to respond. So you may have to have a database administrator take on recovery of an email environment. So your documentation, your planning should account for people who are knowledgeable and capable and have the expertise and are not necessarily the subject matter experts or even day to day administrators in any particular area.”

According to Kevin McNulty, a director at critical event management solutions provider Everbridge, organizations often skip exercising their disaster recovery plans under the assumption that tests are costly and time-consuming. He recommends finding ways to incorporate disaster recovery exercises into regular maintenance updates.

As important as disaster recovery is, the traditional method of having a secondary disaster recovery site is expensive. Today, things like disaster recovery-as-a-service make it easier and less expensive, explained Jaggers. Gartner defines disaster recovery-as-a-service as “a productized service offering in which the provider manages server image and production data replication to the cloud, disaster recovery run book creation, automated server recovery within the cloud, automated server failback from the cloud, and network element and functionality configuration, as needed.”

Preston believes that in the future, most companies will be doing disaster recovery in the cloud. "It just makes sense," he said. "You need all of this hardware and software and storage and all of these resources available at a moment's notice, but you don't want to pay for them until you need them. The public cloud is simply the most sensible way to do DR. If you are the kind of company where money is no object, where downtime costs you a million dollars a minute, then maybe you can justify doing it the old, expensive way. But for the vast majority of companies, I think they will do DR the way that we do it."