For the leadership team at Backpack Healthcare, a pediatric and family mental health care provider incorporating AI into its day-to-day operations, safety and security are crucial.
Backpack Healthcare uses AI in several ways, such as creating treatment plans for patients. That saves time for providers and keeps care consistent for patients, Prashanth Brahmandam, the company’s chief technology officer, explained. But the innovation also raises security concerns, he said, and guidelines need to be established so other providers can use the technology to its full potential, safely.
“We need to make sure that it is safe, but there’s no standards and regulations or best practices,” Brahmandam said.
But now, there’s a federal push to create rules for using AI, and Backpack Healthcare is a part of the endeavor.
In an effort to guide federal regulations for artificial intelligence development and use, the US government has created a consortium of AI developers and users, academics, civil society organizations and government entities to set standards.
The Gaithersburg, Maryland-based National Institute of Standards and Technology (NIST), under the Department of Commerce, announced the creation of the U.S. AI Safety Institute Consortium on Feb. 8. The consortium was designed to develop science-based safety standards for AI operation and design.
It’s made up of more than 200 companies and organizations, including top entities like Adobe, Meta, OpenAI, RAND Corporation, Booz Allen Hamilton, Deloitte and Wells Fargo & Company. There are also several startups in the cohort, with many based in the DC area.
Consortium members will help craft guidance for AI safety. For example, they will identify AI capabilities and potential causes of harm. They will also create methods for successful red teaming and privacy preservation, develop rules for watermarking AI-generated content and study how people interact with AI in different situations.
This announcement comes after President Joe Biden issued an executive order in October focusing on AI safety and security.
“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” Secretary of Commerce Gina Raimondo said in a press release.
NIST also created the AI Risk Management Framework, released in January 2023, a voluntary set of guidelines meant to help organizations reduce risks associated with AI.
Elizabeth Kelly, who will lead the AI Safety Institute as its director, has helped spearhead the current administration’s AI security efforts, specifically in tech policy and financial regulation. Elham Tabassi was named the institute’s chief technology officer. She led the development of the NIST AI Risk Management Framework and was named one of the most influential people in AI by Time Magazine in 2023.
‘Don’t reinvent the wheel’
For Andrew Gamino-Cheong, one of the founders of the startup Trustible, assessing risks and guaranteeing protection in the AI world is part of his daily work. Trustible, a DC-based technology provider and 2024 DC RealLIST Startup, helps organizations manage AI, build trust and reduce risk.
When Gamino-Cheong was tracking the beginning of AI regulation with the proposal of the EU AI Act in 2021, he realized how “complicated” such a policy could get. He wanted to create a platform that could help guide organizations to use AI responsibly and comply with a mix of different laws and frameworks, he said. He’s preparing to bring those experiences to NIST’s consortium.
“We’re looking forward to collaborating and sharing some of our own research and insights, especially since we actually, on a regular basis, are helping companies adopt the NIST AI Risk Management Framework,” Gamino-Cheong said.
Similarly, stackArmor, a technology security services company based in Tysons, Virginia, helped clients protect data and information when cloud programs first boomed. Now, stackArmor Founder Gaurav Pal said he’s adding AI security and safety to his company’s products.
Pal supported one of the first US government cloud projects back in 2009, and many rules and regulations now surround cloud services at the government level, he said. He hopes some of the lessons learned along the way, like balancing security while exploring a new technology’s capabilities, will be applied to the AI safety guidelines created through the consortium.
“There are lots of lessons learned,” Pal said. “What we’re suggesting is, ‘Hey, don’t reinvent the wheel.’”
For health providers, privacy is key
Backpack Healthcare, a member of the cohort, is bringing a unique perspective on AI safety.
The Elkridge, Maryland-based provider, which was a 2023 Baltimore RealLIST Startup, uses AI in scheduling and crafting treatment plans. CTO Brahmandam said a health provider perspective is essential in conversations about AI regulation — especially concerning patient privacy.
“It aligns with our vision and what we want to do: Making sure that AI is safe, especially in our field,” Brahmandam said.
Brahmandam is most concerned about patient safety when it comes to using AI in medical contexts. Backpack Healthcare has established an internal AI Governance Committee to ensure data is captured and stored safely.
That need for safety when using AI, especially around patient privacy, was one of the biggest draws of joining the consortium, Brahmandam said.
“There are a lot of things we get from the front line of providing care to patients that we can contribute back to the consortium to set standards for how patients should be treated and how we can ensure that AI algorithms are safe for people,” he said.
Who’s part of the consortium?
Here are more of the companies and organizations throughout Technical.ly’s markets participating in NIST’s AI Safety Institute Consortium:
DC area
- Trustible
- stackArmor
- EqualAI
- Center for Democracy and Technology
- Center for Security and Emerging Technology at Georgetown University
- Gryphon Scientific
Baltimore
- Backpack Healthcare
- Alliance for Artificial Intelligence in Healthcare
- Johns Hopkins University
Philadelphia
- Benefits Data Trust
- Drexel University
- Vanguard
Pittsburgh
- Preamble
- Carnegie Mellon University
- The Carl G. Grefenstette Center for Ethics in Science, Technology and Law at Duquesne University
- University of Pittsburgh