Despite practitioners’ best efforts, there are still a few things standing in the way of responsible AI engineering’s widespread adoption.
The Responsible Technology Summit, the inaugural conference from the Partnership to Advance Responsible Technology (PART), brought together experts in responsible innovation from across the country at Pittsburgh’s Phipps Conservatory and Botanical Gardens to discuss industry challenges and possible solutions.
Beyond a panel on disinformation and tech and a keynote address from data activist Renée Cummings, another noteworthy conversation at the event was a fireside chat between Matt Gaston, director of the AI Division at Carnegie Mellon University’s Software Engineering Institute, and Rachel Dzombak, digital transformation lead for the SEI’s Emerging Technology Center. The two spent an hour discussing responsibility considerations specific to artificial intelligence engineering.
Here are some highlights from that conversation for those building the industry’s future solutions:
We need to understand how AI can fail
Gaston noted that there’s a lot to be excited about in the world of AI applications today. General media coverage of computer vision, machine learning and natural language processing has increased, for instance, as those applications have made huge strides in functionality over the past decade.
But all that excitement has also distracted from the setbacks that still exist, and from the failures that led to those successes.
“There hasn’t been a whole lot of work into understanding when those systems fail, how they failed, what to do about failure, and how to ensure you can mitigate those risks and failures associated with it,” Gaston said in the fireside chat. “So AI engineering is a nascent field. We’re using engineering to sort of elicit the traditional engineering disciplines behind it. It’s really about creating that discipline, and creating the best practices, processes and tools to do AI, as well.”
That means AI engineers need to be more public about the shortcomings of these systems, along with highlighting the new innovation they afford. For example, Gaston and Dzombak pointed to applications like facial recognition that, while useful in some instances, have also been shown to discriminate on the basis of race. Setbacks like that should be investigated and communicated to the public, Gaston said.
What’s missing from the responsible AI conversation?
A concern that dovetails with the need for more comprehensive communication about system failures is the need for more comprehensive and diverse voices in the room where AI engineering is happening, Dzombak said. If the industry wants new questions to be asked of emerging technology, then the answer may be to bring new perspectives to the table.
“Who self-identifies as an AI engineer is a massive challenge that we’re facing,” she said. True responsible AI engineering requires such a wide range of skillsets that it “cannot be done by a single person,” Dzombak argued. “There’s absolutely no way that one person can have the quantitative, the qualitative, all of the approaches [needed]. It has to be done by a team.”
The problem is that different teams can speak in radically different ways about the exact same topic. Establishing a shared language for the entire implementation process of a new AI application can ensure that people from many different experiences and backgrounds have a chance to communicate their concerns clearly and universally.
To foster widespread responsibility in the industry, there needs to be investment and regulation
Asked what’s needed to get these responsible tech goals implemented across all organizations and companies working in AI, Gaston called for more investment. The fact of the matter, he said, is that responsible tech development isn’t always the norm, and it will likely require money, regulations and a broader industry push to get there.
“I’m starting to see glimmers of going deeper into turning ideas around responsible AI into tools and mechanisms and practices,” he said. “But I think we need an influx of a whole lot more investment, a whole lot more research, and then ultimately driving a part of that AI engineering discipline that allows and enables engineers and the teams around them to actually build these systems.”
Dzombak echoed his sentiments, saying that while the “move fast and break things” moment is over in much of the tech world, some still practice that mindset. While some in the industry have adopted a responsible tech approach of pausing development that’s moving too quickly and ensuring all outcomes and consequences are considered ahead of time, she acknowledged that there’s always at least one engineer who will keep running with the work.
That, she said, is where a “tactical implementation” of a responsible innovation mindset at all levels will help.
Sophie Burkholder is a 2021-2022 corps member for Report for America, an initiative of The Groundtruth Project that pairs young journalists with local newsrooms. This position is supported by the Heinz Endowments.