
AI companies say they’re actually looking forward to government regulation in the form of a new safety consortium

Many are already doing work similar to what the consortium is intended to accomplish, founders note, so collaboration could help get everyone on the same page.

The entrance sign at NIST's Gaithersburg, Maryland campus. (Courtesy NIST/J. Stoughton)
Update: This story has been updated to clarify that Gaurav Pal supported an early US government cloud project in 2009, outside of his involvement with stackArmor. (2/28/2024, 12:09 p.m.)

For the leadership team at Backpack Healthcare, a pediatric and family mental health care provider incorporating AI into its day-to-day operations, safety and security are crucial.

Backpack Healthcare uses AI in several ways, such as creating treatment plans for patients. This saves time for the provider and maintains consistency for the patient, Prashanth Brahmandam, the company’s chief technology officer, explained. But the innovation also raises security concerns, he said: Guidelines need to be established so other providers can use the technology to its full potential, safely.

“We need to make sure that it is safe, but there’s no standards and regulations or best practices,” Brahmandam said.

But now, there’s a federal push to create rules for using AI, and Backpack Healthcare is a part of the endeavor.

In an effort to guide federal regulations for artificial intelligence development and use, the US government has created a consortium of AI developers and users, academics, civil society organizations and government entities to set standards.

The Gaithersburg, Maryland-based National Institute of Standards and Technology (NIST), under the Department of Commerce, announced the creation of the U.S. AI Safety Institute Consortium on Feb. 8. The consortium was designed to develop science-based safety standards for AI operation and design.

It’s made up of more than 200 companies and organizations, including major names like Adobe, Meta, OpenAI, RAND Corporation, Booz Allen Hamilton, Deloitte and Wells Fargo & Company. Several startups are also in the cohort, many of them based in the DC area.

Consortium members will help craft guidance for AI safety. For example, they will identify AI capabilities and potential causes of harm. They will also create methods for successful red teaming and privacy preservation, develop rules for watermarking AI-generated content and study how people interact with AI in different situations.

This announcement comes after President Joe Biden issued an executive order in October focusing on AI safety and security.

“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” Secretary of Commerce Gina Raimondo said in a press release.

NIST also created the AI Risk Management Framework, released in January 2023, which is a voluntary system meant to reduce risks associated with AI.

Elizabeth Kelly, who will lead the AI Safety Institute as its director, has helped spearhead the current administration’s AI security efforts, specifically in tech policy and financial regulation. Elham Tabassi was named the institute’s chief technology officer. She led the development of the NIST AI Risk Management Framework and was named one of the most influential people in AI by Time Magazine in 2023.

‘Don’t reinvent the wheel’

For Andrew Gamino-Cheong, one of the founders of the startup Trustible, assessing risks and guaranteeing protection in the AI world is part of his daily work. Trustible, a DC-based technology provider and 2024 DC RealLIST Startup, helps organizations and companies manage AI, build trust and reduce risk.

When Gamino-Cheong was tracking the beginning of AI regulation with the proposal of the EU AI Act in 2021, he realized how “complicated” such a policy could get. He wanted to create a platform that could help guide organizations to use AI responsibly and comply with a mix of different laws and frameworks, he said. He’s preparing to bring those experiences to NIST’s consortium.

“We’re looking forward to collaborating and sharing some of our own research and insights, especially since we actually, on a regular basis, are helping companies adopt the NIST AI Risk Management Framework,” Gamino-Cheong said.

Similarly, stackArmor, a technology security services company based in Tysons, Virginia, helped clients protect data and information when cloud programs first boomed. Now, stackArmor Founder Gaurav Pal said he’s adding AI security and safety to his company’s products.

Pal supported one of the first US government cloud projects back in 2009. There are many rules and regulations surrounding cloud services at the government level, Pal said. He’s hoping some of the lessons learned in that development, like balancing security while exploring the capabilities of the new technology, are applied to AI safety guidelines created through the consortium.

“There are lots of lessons learned,” Pal said. “What we’re suggesting is, ‘Hey, don’t reinvent the wheel.’”

For health providers, privacy is key

Backpack Healthcare, a member of the cohort, is bringing a unique perspective on AI safety.

The Elkridge, Maryland-based provider, which was a 2023 Baltimore RealLIST Startup, uses AI in scheduling and crafting treatment plans. CTO Brahmandam said a health provider perspective is essential in conversations about AI regulation — especially concerning patient privacy.

“It aligns with our vision and what we want to do: Making sure that AI is safe, especially in our field,” Brahmandam said.

Brahmandam is most concerned about patient safety when it comes to using AI in medical contexts. Backpack Healthcare has established an internal AI Governance Committee to ensure data is captured and stored safely.

One of the biggest draws of joining the consortium was this need for safety when using AI, especially when it comes to patient privacy, Brahmandam said.

“There are a lot of things we get from the front line of providing care to patients that we can contribute back to the consortium to set standards for how patients should be treated and how we can ensure that AI algorithms are safe for people,” he said.

Who’s part of the consortium?

Here are more of the companies and organizations throughout Technical.ly’s markets participating in NIST’s AI Safety Institute Consortium:

DC area

  • Trustible
  • stackArmor
  • EqualAI
  • Center for Democracy and Technology
  • Center for Security and Emerging Technologies at Georgetown University
  • Gryphon Scientific

Baltimore

  • Backpack Healthcare
  • Alliance for Artificial Intelligence in Healthcare
  • Johns Hopkins University

Philadelphia

  • Benefits Data Trust
  • Drexel University
  • Vanguard

Pittsburgh

  • Preamble
  • Carnegie Mellon University
  • The Carl G. Grefenstette Center for Ethics in Science, Technology and Law at Duquesne University
  • University of Pittsburgh
Technically Media