
With extreme weather events on the rise, consider data when making disaster relief plans

As government agencies develop plans for cyber attacks, they need to focus more on natural disaster relief planning, too, said Veeam Software Senior Director of Enterprise Strategy Jeff Reichard.

Increased natural disasters mean even more planning is necessary for data recovery, says Veeam Software. (Photo via Flickr user DiamondTDesign, used under public domain).

When Hurricane Ida swept through Louisiana earlier this summer, an unexpected victim was the National Archives and Records Administration (NARA). For nearly two weeks, the agency’s electronic records archive was offline following flooding from the storm.

But Jeff Reichard, senior director of enterprise strategy at Veeam Software, the parent company of Penn Quarter’s Veeam Government Solutions, noted that while multiple networks went down for such an extended period, there was a win in the books, as well.

“One of the cool things about the story is that they didn’t actually lose data, what they lost was network connectivity,” Reichard told Technical.ly. “They didn’t lose the records that they have custody over. They just stopped having the ability to accept new ones for a couple of weeks until the network connections got fixed.”

In the case of Ida, Reichard noted that the data center retained power, but it was the networks that were unable to function following the storm. That emphasized a shortcoming he often sees: As the climate changes and the potential for natural disasters grows, agencies need to be paying attention to their disaster relief plans.

According to Reichard, while many local and state governments do have such plans currently, they lack the necessary depth and frequent testing. In the case of NARA and Ida, he said, the agency needed more focus on the infrastructure, not just data centers and power.

“Data is growing a lot,” Reichard said. “Everyone knows they should test their disaster recovery frequently…but in the real world that tends to go to the bottom of the priority list until the time comes to check the box for this quarter or this year, and then you do it in the minimum way that you can in order to check the box.”

Reichard hopes this looks like local governments and even federal agencies running tests that take down primary networks to see whether backup options can suffice in an emergency. When it comes to the data itself, governments are good at determining the most critical data that needs protection and recovery, he said, but it’s also important to have a full, comprehensive plan.

Especially, he noted, with seemingly small pieces of data like the home and cell numbers of contractors that entities might need in case of an emergency. It’s a relatively easy fix, but one that would likely still take up precious time in a disaster.

Plus, for state and local governments, an airtight plan is especially necessary. As one of the sectors hardest hit by ransomware attacks over the past few years (in 2019, city governments in Baltimore and Philadelphia both suffered attacks), they need to be prepared for all scenarios.

“Recovering from a ransomware attack is basically an in-place disaster recovery exercise,” Reichard said. “Everything’s down, just like it would be if I had a hurricane or a tornado or a flood, but now I have to get it all back up and running with suspect data that I’ve got to cleanse before I put it back in place on hardware that may also be suspect.”

Jeff Reichard, Veeam Software (Courtesy photo).

This means that as agencies prepare for relief, while it’s ideal to have separate natural disaster and cyber attack plans, the two can share the same general tactics and ideas.

Specific steps, like checking hardware for bugs, might not be too relevant in the case of a data center burning down in a wildfire, he noted, but all the other data orchestration tactics in place will be. Plus, there’s plenty of overlap in the technical tools and in the way agencies need to mobilize their people and their processes, he added.

With that overlap in mind, he hopes governments can see disaster planning as an ongoing and ever-developing issue that requires upkeep.

“It would be nice to see public sector IT make a transition away from the check-box, [disaster relief] test exercise to a more living [disaster relief] exercise,” Reichard said. “Because then it would actually have a better chance of keeping up with the way that applications change, because they do all the time.”

Especially, he added, with how high the stakes are in both cases. Just this week, the Federal Emergency Management Agency testified before Congress to say that weather-related disasters are on the rise.

“We are seeing an era where we’re going to be getting a lot more natural disasters than we’ve had in the past,” Reichard said. “And we’re also going to be getting, it looks like, a lot more general stresses on society with things like climate-related migration and stuff like that. So, business continuity is only going to get more important than it already is.”
