
AI for good? Why CMU’s Block Center funds research into responsible use of the tech

From improving Wikipedia content moderation to better understanding the skill sets of Appalachians, Dean Ramayya Krishnan says there's plenty to explore about how artificial intelligence can advance society.

Dean Ramayya Krishnan of CMU's Block Center for Technology and Science. (Courtesy photo)

A common view of artificial intelligence is that it’s eventually going to become so sophisticated that it will replace some human workers, or that it’s inherently sinister and should be regarded warily.

But the reality for researchers working on practical applications of AI is far from the Skynet of science fiction, and much of the work is focused on using the technology in ways that will make people’s lives easier, not displace humans entirely.

“I think a core misperception associated with this general sense of unease is the tendency to anthropomorphize machine intelligence and assume that current AI systems can accomplish tasks they simply cannot,” said Dean Ramayya Krishnan of Carnegie Mellon University’s Block Center for Technology and Science. “This serves to inflate the perceived risk of some kind of ‘AI takeover,’” he added, “and distracts from conversations we should be having about AI, such as how to address biases that are inherent to any human-built technological system.”

The Block Center, which has a focus on artificial intelligence and analytics “for good, the future of work, and seeding societal futures,” announced its latest round of grants for AI-based research in June. Among them is professor Rick Stafford’s project, titled “Jobs in an Appalachian Clean Energy Transition: A Regional Skills-Matching.” Stafford, a distinguished service professor of public policy at Heinz College, is also a native of Waynesburg, Greene County.

“I grew up in Appalachia, so I have some feeling for how technology, economic forces, environmental changes have really left a lot of communities and the jobs that drove those communities behind,” Stafford said.

What he seeks to do with his AI project is better understand the kinds of skill sets that people in those communities have — he’s married to a coal miner’s daughter, by the way — to help them transition into the jobs of the future. Ideally, he said, those jobs will also help southwestern Pennsylvania transition away from its fossil fuel legacy. The future of energy offers an opportunity to transition many of these jobs, including those of steelworkers, away from their reliance on the extraction industry.

“How can we look at solar on abandoned mines, growing hemp on abandoned mines? Maybe it’s building solar farms, wind farms, all kinds of decarbonization strategies that rely on efficiency approaches,” he said. “What we’re trying to do in the next year is really map out the potential job-to-job opportunities. How can we get some of these changes in public policy that can make a difference in people’s lives?”

There are other examples, from both this latest grantee cohort and other research projects, of how AI research can be applied in meaningful ways.

The Block Center awarded a grant to an edtech project developed by CMU professors Ken Koedinger and Lee Branstetter as a collaboration among the Block Center, CMU’s Human-Computer Interaction Institute, and the University of Pittsburgh’s Center for Urban Education. The project is studying the best ways to combine AI tutoring with human tutoring to help improve math learning. A pilot program focused on low-income students from historically disadvantaged groups demonstrated that the combination of computer and human tutoring nearly doubled the participants’ rate of math learning, according to the Block Center.

And Professor Haiyi Zhu received a Block Center grant for her research into AI content moderation, which aims to improve the quality of content moderation decisions, made with and without AI, on Wikipedia.

Is it possible to dispel some of the negative perceptions of AI? Echoing sentiments we’ve heard from other Pittsburgh tech watchdogs, Krishnan said the technology has vast potential, but needs to be used responsibly. Accordingly, the Block Center set up a responsible AI initiative to support transparency in AI research.

“While much remains to be done in technology and policy required to realize responsible and trustworthy AI, this should not hold us back from wanting to harness the power and vast potential of this technology to serve the public good,” Krishnan said. “AI will continue to transform the way we live, work, and play for many years to come, and the Block Center is dedicated to advancing solutions and improving the design and implementation of these systems so we all can benefit from such a profound new technology.”
