Technologists don’t have the luxury of ignoring the ethical considerations behind their work.
Or, at least, they shouldn’t, says Drexel professor Kris Unsworth. And there’s a growing constituency of academics and engineers who agree.
“Science isn’t just this pure thing away from ethics,” she said. “People need to know they go together. We can’t, as technologists, just step back and say, ‘Not my problem.’ We can do better.”
We reached out to Unsworth, whose research focuses on technology and ethics, after she tweeted out our story on a City Council hearing where city leaders struggled to make sense of a new data tool that they worry could reinforce decades of racist criminal justice policies.
"the designer is taking the role of an academic removed from ethics." Shouldn't be the case. https://t.co/uocURSvVnN via @TechnicallyPHL — Kris Unsworth (@k_runs), September 20, 2016
At the hearing, the statistician charged with building the tool, Penn’s Richard Berk, told a city committee that the algorithm would involve tradeoffs: If it were the pinnacle of fairness, it would be less accurate.
Meaning, if the tool were blind to factors correlated with race, like ZIP codes, it would make more mistakes.
“In the real world,” Berk said, “we can’t have it all.”
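To make that tradeoff concrete, here's a minimal sketch in Python. It uses synthetic data and a hypothetical proxy variable standing in for ZIP code; it is our illustration, not Berk's actual model or the city's tool.

```python
# Minimal sketch of the fairness/accuracy tradeoff Berk described.
# Synthetic data only -- this is NOT the city's risk tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: a ZIP-code signal correlates with both race and
# re-arrest risk, alongside a race-neutral predictor.
zip_risk = rng.normal(size=n)       # stand-in for ZIP-level signal
prior_record = rng.normal(size=n)   # stand-in for a neutral predictor
logits = 1.2 * zip_risk + 0.8 * prior_record
rearrest = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_full = np.column_stack([zip_risk, prior_record])  # uses the proxy
X_blind = prior_record.reshape(-1, 1)               # "blind" to the proxy

for name, X in [("with ZIP proxy", X_full), ("ZIP-blind", X_blind)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, rearrest, random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {acc:.3f}")

# Expected pattern: the ZIP-blind model ignores the race-correlated
# signal but pays for it in accuracy -- "we can't have it all."
```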
But what struck us the most was the position he took. The decision is in your hands, he told the committee. I’m just making the tool. My work here is done.
Later, when we asked him about this stance in general, he said it was a matter of playing to his expertise. “I have no legal training or any training as an ethicist. And I have no experience in the arm wrestling of politics,” Berk wrote in an email. “All I can say with confidence is that in criminal justice, like in most things, you can’t have it all.”
We wondered about this and the implications for the algorithms that are increasingly ruling our lives. Can technologists, for whatever reason, function outside the realm of ethics? Do they? And what does that mean, then, for these tools? Where does the responsibility lie?
As technology moves toward self-awareness, what moral role should coders play?
Unsworth believes it’s crucial for technologists to take on this burden, to be accountable to their creations and the people whom they affect.
“People who design algorithms have a responsibility to that fairness piece,” she said.
She’s not the only one who’s thinking this way. There’s a growing movement, she said, among technologists who are grappling with what it means to make responsible algorithms. It’s a shift from a more traditional, status-quo industry view. She likened it to how engineers moved to make technology more accessible in the ’90s.
Technology, she said, is moving into a more self-aware phase.
In July, Unsworth attended a conference in Germany called Data, Responsibly, co-organized by Drexel computer science professor Julia Stoyanovich. Unsworth is part of a working group, drawing from both academia and industry, that is developing a set of principles for “accountable algorithms.” She’s also in the middle of a four-year, $300,000 National Science Foundation grant to study the ethics of algorithms with Drexel professor Kelly Joyce.
Researchers at Haverford College, like computer science professor Sorelle Friedler, and at Penn, whose Fels Policy Research Initiative is hosting a series of talks on fairness and algorithms this semester, are also focused on these issues. During Philly Tech Week 2016 presented by Comcast, Community Legal Services and Philadelphia Legal Assistance held a panel on the same topic.
Meanwhile, in Brooklyn, our sister site reported on former Kickstarter data chief Fred Benenson’s use of the phrase “mathwashing” to describe how mathematical models can “paper over a more subjective reality.”
In D.C., a Georgetown professor named Pablo Molina said technologists must develop stronger codes of conduct.
It’s a topic that’s also rising to national prominence, in part due to a new book by New York City-based author Cathy O’Neil called Weapons of Math Destruction. (She wrote a blog post about what bothered her about the City Council hearing we covered.)
OK, so people are thinking about this.
What of solutions to the fairness/accuracy tradeoff?
Because Unsworth, for her part, thinks we can “have it all.”
Unsworth advised creating an algorithm to test the fairness of the city’s forthcoming risk assessment model, which is still early in production — the courts hope to get it running within the next two years. It’s important, she said, for these kinds of models to be self-critiqued. Transparency is crucial, too: judges who use this information should be able to explain how the tool is affecting their decision-making.
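What might such a self-critique look like? One common approach — our hypothetical sketch, not Unsworth’s specific proposal — is to audit a model’s error rates across demographic groups, for instance checking whether false positive rates diverge:

```python
# Hypothetical fairness audit: compare false positive rates across groups.
# An illustrative sketch, not the courts' actual review process.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of truly low-risk people the model flagged as high-risk."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

def audit_by_group(y_true, y_pred, group):
    """Report the FPR for each group and the gap between them."""
    rates = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy data standing in for a risk tool's labels and predictions.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates, gap = audit_by_group(y_true, y_pred, group)
print(rates, f"FPR gap: {gap:.2f}")
```

A persistent gap on a check like this wouldn’t settle the ethical question, but it would give judges and the public something concrete to interrogate.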
In a broader view, she said that education is important. At Drexel, cybersecurity students are required to take her courses on ethics and information and policy. Might computer science students be next?
Lastly, the public should gain a better understanding of the subjectivity of algorithms. Too often, she said, people think that because it’s science, it’s objective. And, the thinking goes, since it’s objective, it’s morally neutral.
“What many of us are saying is, ‘That’s just not the case,’” she said.