State of Local Tech Month 2023

With NSF support, a group of tech and policy experts teamed up to build more trustworthy AI

The University of Maryland, George Washington University and more are teaming up with the National Science Foundation to launch the NSF Institute for Trustworthy AI in Law & Society.

UMD professors Hal Daumé III (left) and Katie Shilton. (Courtesy photo)

As of today, three local universities will be heading an effort to change the way we develop AI.

The University of Maryland (UMD), George Washington University (GWU) and Morgan State University joined Cornell University to launch the Institute for Trustworthy AI in Law & Society (TRAILS). The National Science Foundation (NSF)-backed center brings together AI and machine learning specialists with social scientists, legal scholars, educators and public policy experts to rethink AI and develop technology that attracts more public trust.

UMD professor Hal Daumé III, principal investigator and the director of TRAILS, told Technical.ly that AI development has traditionally revolved around autonomous systems that operate almost entirely on their own. This leaves them disconnected from the real world, which is a problem given how deeply AI has infiltrated everyday life.

“Now that these systems are really touching people, there’s this big question of: Are the values that are baked into the systems that we’re developing matched well to the values of the people who will be impacted by the systems?” Daumé said.

Together, TRAILS experts want to promote trust in AI and lower its risks, as well as support public knowledge of the technology. The institute is funded through a $20 million, five-year award from the NSF, though Daumé said the team will look for additional partnerships and investments to sustain the institute beyond that timeframe. The effort also includes partnerships with the National Institute of Standards and Technology, DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

“It’s great to work with the Googles and Microsofts and Facebooks and so on of the world, but these voices get heard loudly already,” Daumé said. “So one of the things that we’re really hoping to do is build up a network of either smaller companies or things in the public sector that are just strongly impacted by AI systems and provide a hub where we can all learn from each other.”

TRAILS will explore four main research pathways for better AI development: participatory AI; developing advanced machine learning to reflect stakeholder values; evaluating how the public makes sense of AI and user trust; and participatory governance and trust. The four thrusts will be led by UMD’s Katie Shilton, UMD’s Tom Goldstein, GWU’s David Broniatowski and GWU’s Susan Ariel Aaronson, respectively.

AI moves at such a fast pace that it is rapidly affecting all aspects of life, Broniatowski said, which makes transparency in its use essential.

“Given the fact that AI is becoming more and more a part of our life, there’s always the possibility and there are concerns that these technologies are being used in a way that’s opaque, that’s not clear, to the people who are using them or who are being affected by them,” Broniatowski said. “And as a result, people don’t trust them and won’t trust them.”

With his thrust, Broniatowski said his team will examine the measures and algorithms currently intended to promote trust in AI and boost transparency. Right now, he said, there is little evaluation of what these measures actually do and whether they work. What’s missing from the process, he believes, are people with a deep understanding of human psychology assessing the technology. His team will therefore bring users in for user studies and structured evaluations of what these metrics mean for the public.

“We really need to think through the human consequences of this technology. The issue of trust really becomes front and center, because trust is really central to everything that we do in our society,” Broniatowski said. “It’s really what keeps our society together.”

For the NSF, this is the latest in a series of awards establishing a cohort of national AI research institutes. The Alexandria, Virginia-based independent agency invested $140 million in AI institutes across two previous rounds of awards. The TRAILS launch also coincides with the $140 million the Biden administration announced today for new AI research centers.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan in a statement. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”
