
Why CMU launched the Responsible AI initiative

The latest effort from the university is one of many recent local pushes for more ethical and responsible use of emerging technology. Here's what experts across government, academia and business say about the topic's importance.

Carnegie Mellon University. (Photo by Flickr user Tony Webster, used via a Creative Commons license)
Amid growing local and national discussion and concern about the ethical use of emerging technology, Carnegie Mellon University is launching a new program focused on the responsible use of artificial intelligence.

Responsible AI, as it’s called, is an initiative out of the Oakland-based university’s Block Center for Technology and Society, with support from faculty in the School of Computer Science and members of the Dean’s Office staff. Its goal is to convene experts across fields such as computer science, engineering, public policy and business to further the initiative’s missions: translating research into policy and social impact; building community and serving local and global communities; providing new education and training; and establishing partnerships beyond the university to promote the responsible use of AI.

It’s one of many recent local efforts to ensure that technologists in the AI hub are learning how to use the technology responsibly. Elsewhere in Pittsburgh, the Partnership to Advance Responsible Technology issued an inaugural report on the topic earlier this year, and a recent study from the University of Pittsburgh’s Institute for Cyber Law, Policy and Security reviewed the local use of public algorithms.

The initiative launched this week with a panel discussion among experts in AI use and policy, including representatives from the White House, the Patrick J. McGovern Foundation, Salesforce and CMU’s School of Computer Science. The hour-long discussion, moderated by Rayid Ghani, a distinguished career professor in the CMU Machine Learning Department and Heinz College of Information Systems and Public Policy, centered on the vision for the new initiative, as well as the common problems identified around AI across academia and the public and private sectors.

“Basically the core of this initiative — it’s not about developing AI for the sake of AI, but it’s very much anchored in people, in communities and societal issues, and in supporting efforts across CMU that take our work and lead to impact,” Ghani said at the start of the event.

Panelists for the launch event for Responsible AI at CMU. (Courtesy photo)

The discussion covered the risks of unregulated AI use that each panelist had encountered in their career, as well as the ways AI has led to real progress in their respective fields.

Sorelle Friedler, the assistant director for data and democracy at the White House Office of Science and Technology Policy, expressed concern about the lack of understanding of what exactly any given AI system is doing. That’s a risk to basic science research, “but we also have those same issues when it comes to deploying AI in areas that more directly impact people,” she said. “And so we need to make sure that we understand what these systems are doing and that, in the case of high stakes systems that we understand those really quite well, and can ensure that these are decisions that we want to be made in that way.”

Other panelists expressed concern about unintended consequences of AI use across industries. Claudia Juech, the VP of data and society at the Patrick J. McGovern Foundation, shared worries about how the nonprofit sector’s increasingly widespread use of data and AI could put the people it aims to serve at risk down the road. While nonprofits often have good intentions, their work with some of the most vulnerable populations means that any use of AI toward their missions should be considered very carefully.

“What they might be doing today might be safe and adhere with standards but the question of what might that enable governments in the future [to do] is definitely something that is fairly high on my mind,” Juech said.

Meanwhile, Hoda Heidari, an assistant professor in the CMU Machine Learning Department and the Institute for Software Research, shared her experience researching machine learning methods to address discrimination and bias. While there have recently been more efforts to make AI system development participatory for all stakeholders, “these kinds of participatory frameworks are limited in scope,” Heidari said. Often, the system architects are asking for input from communities with which they haven’t yet established relationships. “So the question should be, how do we build those relationships?”

Still, panelists also expressed hope and pointed to signs of progress. Paula Goldman, the chief ethical and humane use officer at Salesforce, described decisions the company had made to try to ensure responsible use of AI in its tech platforms. “We think a lot about, what is the end impact on society and how do we create a feedback loop,” she said. Internally, Goldman said, Salesforce formed an ethical use advisory council to encourage deliberation about the technologies the company creates.

That council is one of the reasons Salesforce didn’t go into facial recognition as a product, and why it prevents its customers from using AI vision technology for that purpose. Goldman argued that similar feedback systems, involving stakeholders of all backgrounds, should inform decisions about the responsible use of AI.

Those interested in learning more about CMU’s new initiative, or joining the work, are encouraged to reach out to responsibleAI@cmu.edu.

Sophie Burkholder is a 2021-2022 corps member for Report for America, an initiative of The Groundtruth Project that pairs young journalists with local newsrooms. This position is supported by the Heinz Endowments.