
With extreme weather events on the rise, consider data when making disaster relief plans

As government agencies develop plans for cyber attacks, they need to focus more on natural disaster relief planning, too, said Veeam Software Senior Director of Enterprise Strategy Jeff Reichard.

Increased natural disasters mean even more planning is necessary for data recovery, says Veeam Software. (Photo via Flickr user DiamondTDesign, used under public domain).

When Hurricane Ida swept through Louisiana earlier this summer, an unexpected victim was the National Archives and Records Administration (NARA). For nearly two weeks, the agency’s electronic records archive was offline following flooding from the storm.

But Jeff Reichard, senior director of enterprise strategy at Veeam Software, the parent company of Penn Quarter’s Veeam Government Solutions, noted that while multiple networks went down for such an extended period, there was a win in the books, as well.

“One of the cool things about the story is that they didn’t actually lose data, what they lost was network connectivity,” Reichard told Technical.ly. “They didn’t lose the records that they have custody over. They just stopped having the ability to accept new ones for a couple of weeks until the network connections got fixed.”

In the case of Ida, Reichard noted, the data center retained power; it was the networks that were unable to function following the storm. That underscored a shortcoming he often sees: as the climate changes and the potential for more natural disasters grows, agencies need to pay attention to their disaster relief plans.

According to Reichard, while many local and state governments do currently have such plans, they lack the necessary depth and frequent testing. In the case of NARA and Ida, he said, the agency needed more focus on the broader infrastructure, not just data centers and power.

“Data is growing a lot,” Reichard said. “Everyone knows they should test their disaster recovery frequently…but in the real world that tends to go to the bottom of the priority list until the time comes to check the box for this quarter or this year, and then you do it in the minimum way that you can in order to check the box.”

Reichard hopes this testing will look like local government and even federal agency exercises that take down primary networks to see whether backup options can suffice in an emergency. But when it comes to the data itself, while governments are good at determining the most critical data that needs protection and potential recovery, he said it’s also important to have a full, comprehensive plan.

That’s especially true, he noted, for seemingly small pieces of data like the home and cell numbers of contractors that entities might need in an emergency. Tracking those down is a relatively easy fix, but one that would still eat up precious time in a disaster.

Plus, for state and local governments, an airtight plan is especially necessary. As one of the hardest-hit sectors by ransomware attacks over the past few years (in 2019, city governments in Baltimore and Philadelphia both suffered attacks), they need to be prepared for all scenarios.

“Recovering from a ransomware attack is basically an in-place disaster recovery exercise,” Reichard said. “Everything’s down, just like it would be if I had a hurricane or a tornado or a flood, but now I have to get it all back up and running with suspect data that I’ve got to cleanse before I put it back in place on hardware that may also be suspect.”

Jeff Reichard, Veeam Software (Courtesy photo).

This means that as agencies prepare for relief, while it’s ideal for them to have separate natural disaster and cyber attack plans, the two can share the same general tactics and ideas.

Specific steps, like checking hardware for malware, might not be too relevant when a data center burns down in a wildfire, he noted, but all the other data orchestration tactics in place will be. Plus, there’s plenty of overlap in the technical tools and in the way that agencies need to mobilize their people and their processes, he added.

With this overlap, he hopes governments can see this as an ongoing and ever-developing issue that requires upkeep.

“It would be nice to see public sector IT make a transition away from the check-box, [disaster relief] test exercise to a more living [disaster relief] exercise,” Reichard said. “Because then it would actually have a better chance of keeping up with the way that applications change, because they do all the time.”

Especially, he added, given how high the stakes are in both cases. Just this week, the Federal Emergency Management Agency testified before Congress that weather-related disasters are on the rise.

“We are seeing an era where we’re going to be getting a lot more natural disasters than we’ve had in the past,” Reichard said. “And we’re also going to be getting, it looks like, a lot more general stresses on society with things like climate-related migration and stuff like that. So, business continuity is only going to get more important than it already is.”
