Civic News

CMU won $20M to create a new institute focused on ‘human-centric’ AI solutions

With funding from the National Science Foundation, the AI Institute for Societal Decision Making's aim is to bring social scientists and artificial intelligence researchers together to figure out solutions to societal problems.

AI Institute for Societal Decision Making Co-Director Aarti Singh. (Courtesy photo)

From the AI-driven startups that are born on its campus to the Block Center for Technology and Society, Carnegie Mellon University has never shied away from artificial intelligence.

Now, the Pittsburgh institution has announced it’ll be leading the AI Institute for Societal Decision Making, thanks to a $20 million award from the National Science Foundation.

The new institute’s co-director, Aarti Singh, told Technical.ly that it will bring social science and tech research together to figure out how humans and technology can better interact.

“For [the] maximal impact of these technologies, we need to have social scientists and AI researchers collaborate to come up with solutions that will leverage AI capability while ensuring social acceptance,” Singh said.

Some of the areas in which AI can be of assistance, Singh said, are in improving responses to natural disasters and addressing problems that exist in the public health sector. Within the institute, the plan is to create tools that take a “human-centric” approach to tackling such issues, including maternal health. (The Centers for Disease Control and Prevention estimates that the maternal mortality rate for 2021 was 32.9 deaths per 100,000 live births.)

“For pregnant people, we can make recommendations based on their past behavior [and] information we have about their health characteristics,” Singh said. “We will be able to recommend when they need to follow up on an appointment or if they are at risk of developing something like preeclampsia [or] postnatal depression.”

Singh stressed that the interventions suggested by the institute would only be effective if they were made by people who have an understanding of how human decisions are made. Although AI is capable of absorbing a great deal of data, Singh said she understands that policymakers and communities can’t be expected to accept the institute’s recommendations if they don’t feel confident that the technology can be used in a fair and equitable manner.

The institute has a staff of 30, made up of scientists and researchers from the university’s own School of Computer Science and Dietrich College of Humanities and Social Sciences, as well as researchers from institutions such as Harvard University, Boston Children’s Hospital, Howard University and Penn State.

“Our [institute] is really built on a lot of social scientists, cognitive scientists, behavioral scientists coming together with AI researchers to identify when should AI even be used [and] what is the right way AI should interact with humans,” Singh said. “So these are some of the broader questions that we set out to answer as part of this institute.”

In addition to creating AI tools, according to Singh, the AI Institute for Societal Decision Making will be doing a great deal of outreach to educators. This means some of the institute’s professors will assist community colleges in writing their AI curriculums, and others will have a hand in building on the efforts some high school teachers have already made in teaching their students about AI. So far, Singh said, the institute is slated to work with roughly 40 public schools in the area. Within the university itself, the team is hoping to create cross-disciplinary courses and degrees that merge computer science and the humanities.

Of course, there’s a portion of the public — from former Google employees to average citizens — that isn’t yet sold on AI. Singh said the institute has every intention of reaching out to the public to hear their concerns while taking into account the biases, perceptions and risk factors that people care the most about.

“I know there’s a lot of apprehension about AI right now, but we’re really not just releasing tools in the wild to be trained on any kind of algorithm,” Singh said. “We’re actually going to be working with the stakeholders, and making sure that the algorithms are optimizing what the stakeholders care about. You’re using data that’s actually vetted.”

Atiya Irvin-Mitchell is a 2022-2024 corps member for Report for America, an initiative of The Groundtruth Project that pairs young journalists with local newsrooms. This position is supported by the Heinz Endowments.