Sep. 30, 2016 11:57 am

Technologists must do better: Drexel prof on the ethics of algorithms

As Philadelphia tries to make sense of a potentially racist criminal justice algorithm, we asked one expert about who should be responsible.

Kristene Unsworth in her office at Drexel.

(Photo by Juliana Reyes)

Technologists don’t have the luxury of ignoring the ethical considerations behind their work.

Or, at least, they shouldn’t, says Drexel professor Kris Unsworth. And there’s a growing constituency of academics and engineers who agree.

“Science isn’t just this pure thing away from ethics,” she said. “People need to know they go together. We can’t, as technologists, just step back and say, ‘Not my problem.’ We can do better.”

We reached out to Unsworth, whose research focuses on technology and ethics, after she tweeted out our story on a City Council hearing where city leaders struggled to make sense of a new data tool that they worry could reinforce decades of racist criminal justice policies.

At the hearing, the statistician charged with building the tool, Penn’s Richard Berk, told a city committee that the algorithm would involve tradeoffs: If it were the pinnacle of fairness, it would be less accurate.

Meaning: if the tool were blind to factors correlated with race, like ZIP codes, it would make more mistakes.

“In the real world,” Berk said, “we can’t have it all.”
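To make that tradeoff concrete, here’s a minimal sketch in Python, using scikit-learn on purely synthetic data. It is not Berk’s model, and every variable name in it is hypothetical; it only illustrates the mechanism he describes: when a feature that correlates with a protected attribute (the way a ZIP code can correlate with race) also carries real predictive signal, removing it costs the model accuracy.

    # Toy illustration of the fairness/accuracy tradeoff on synthetic data.
    # Not the city's tool or Berk's model; all names here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)              # hypothetical protected attribute
    zip_proxy = group + rng.normal(0, 0.5, n)  # feature correlated with the attribute
    other = rng.normal(0, 1, n)                # feature unrelated to the attribute
    # The outcome depends on both features, so the proxy carries real signal.
    y = (0.8 * zip_proxy + 0.6 * other + rng.normal(0, 1, n) > 0.5).astype(int)

    for label, X in [("with proxy", np.column_stack([zip_proxy, other])),
                     ("without proxy", other.reshape(-1, 1))]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        print(label, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))

On data generated this way, the model that keeps the proxy comes out more accurate, which is exactly the bind the committee was wrestling with.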

But what struck us the most was the position he took. The decision is in your hands, he told the committee. I’m just making the tool. My work here is done.

Later, when we asked him about this stance in general, he said it was a matter of playing to his expertise. “I have no legal training or any training as an ethicist. And I have no experience in the arm wrestling of politics,” Berk wrote in an email. “All I can say with confidence is that in criminal justice, like in most things, you can’t have it all.”


We wondered about this and the implications for the algorithms that are increasingly ruling our lives. Can technologists, for whatever reason, function outside the realm of ethics? Do they? And what does that mean, then, for these tools? Where does the responsibility lie?

As technology moves toward self-awareness, what moral role should coders play?

Unsworth believes it’s crucial for technologists to take on this burden, to be accountable to their creations and the people whom they affect.

“People who design algorithms have a responsibility to that fairness piece,” she said.

She’s not the only one who’s thinking this way. There’s a growing movement, she said, among technologists who are grappling with what it means to make responsible algorithms. It’s a shift from a more traditional, status-quo industry view. She likened it to how engineers moved to make technology more accessible in the ’90s. Technology, she said, is moving into a more self-aware phase.

In July, Unsworth attended a conference in Germany called Data, Responsibly, co-organized by Drexel computer science professor Julia Stoyanovich. Unsworth is part of a working group of academics and industry practitioners that is developing a set of principles for “accountable algorithms,” and she’s in the middle of a four-year, $300,000 National Science Foundation grant to study the ethics of algorithms with Drexel professor Kelly Joyce.

Researchers at Haverford College, like computer science professor Sorelle Friedler, and at Penn, whose Fels Policy Research Initiative is hosting a series of talks on fairness and algorithms this semester, are also focused on these issues. During Philly Tech Week 2016 presented by Comcast, Community Legal Services and Philadelphia Legal Assistance held a panel on the same topic.

Meanwhile, in Brooklyn, our sister site reported on former Kickstarter data chief Fred Benenson’s use of the phrase “mathwashing” to describe how mathematical models can “paper over a more subjective reality.”


In D.C., a Georgetown professor named Pablo Molina said technologists must develop stronger codes of conduct.

It’s a topic that’s also rising to national prominence, in part due to a new book by New York City-based author Cathy O’Neil called Weapons of Math Destruction. (She wrote a blog post about what bothered her about the City Council hearing we covered.)

OK, so people are thinking about this.

What of solutions to the fairness/accuracy tradeoff?

Because Unsworth, for her part, thinks we can “have it all.”

Unsworth advised creating an algorithm to test the fairness of the city’s forthcoming risk assessment model, which is still early in the production phase — the courts hope to get it running within the next two years. She said it’s important to self-critique these kinds of models. Transparency is crucial, too, she said, adding that judges who use this information should be able to explain how this tool is affecting their decision-making process.
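What might such a self-check look like? Here’s a minimal sketch in Python. It is not the city’s tool or a spec of Unsworth’s proposal; the function names and the use of false positive rates as the fairness measure are illustrative assumptions. The point is only that a tool’s error rates can be computed per group and the gap surfaced for scrutiny.

    # A sketch of a fairness audit: compare a risk tool's error rates across
    # groups. Purely illustrative; data, names and the metric are assumptions.
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        # Share of truly low-risk people (label 0) flagged as high-risk (label 1).
        negatives = y_true == 0
        return float((y_pred[negatives] == 1).mean()) if negatives.any() else float("nan")

    def audit_fairness(y_true, y_pred, group):
        # Per-group false positive rates and the largest gap between groups.
        rates = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)}
        return rates, max(rates.values()) - min(rates.values())

    # Synthetic example: random labels, predictions and group membership.
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 1000)   # true outcomes
    y_pred = rng.integers(0, 2, 1000)   # tool's predictions
    group = rng.integers(0, 2, 1000)    # group membership
    rates, gap = audit_fairness(y_true, y_pred, group)
    print(rates, "FPR gap:", round(gap, 3))

Equalizing false positive rates is just one of several competing definitions of fairness, so an audit like this makes the tension Berk described visible rather than resolving it.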

In a broader view, she said that education is important. At Drexel, cybersecurity students are required to take her courses on ethics and information policy. Might computer science students be next?

Lastly, the public should gain a better understanding of the subjectivity of algorithms. Too often, she said, people think that because it’s science, it’s objective. And, the thinking goes, since it’s objective, it’s morally neutral.

“What many of us are saying is, ‘That’s just not the case,’” she said.

-30-
Juliana Reyes

Juliana Reyes was Technical.ly's editorial product lead after reporting on the Philadelphia tech scene for four years. She's co-president of the Asian American Journalists Association Philadelphia chapter and a two-time Philadelphia News Award winner for "Community Reporting of the Year." The Bryn Mawr College grad lives in West Philly, likes her food spicy and wears jumpsuits often.

  • jennifer

    This fits in with the alt.Chi paper that Jen Rode led on racism and technology.

  • Hellcats

    Wow. Let me see: “…the public should gain a better understanding of the subjectivity of algorithms. Too often, she said, people think that because it’s science, it’s objective” and “…What many of us are saying is, ‘That’s just not the case.’” So basically she is, first of all, denying that objective truth exists, and second, that science is a successful process for discovering it. The empirical evidence is against her. But her beef is actually more with mathematics (algorithms) than science in general, and more specifically with numerical optimization (regression, clustering, correlation analysis, machine learning, mathematical programming, etc.).

    It sounds like she is asking programmers to build into mathematical algorithms some sense of ethics or “fairness” in a fundamental way. This is patently absurd. Any attempt to de-tune algorithms, or otherwise incorporate attributes of ethics or “fairness” into the algorithms themselves or into code using them, is essentially adding “bugs” to the application on purpose. Should programmers be the gatekeepers of ethics and fairness in society? How would that work? Where is the accountability? How can a computer scientist predict all the ways in which his algorithm may be used and prevent only “unethical” uses? Should automobile manufacturers endeavor to prevent unethical uses of their cars (e.g., transporting stolen merchandise, or serving as a getaway vehicle)?

    I think that the stance taken by Richard Berk is the correct one: leave the question of the ethics of applying an algorithm to those responsible for the outcomes of its use, and make clear what the trade-offs are. Anything else is not only silly, but dangerous.

    • pushMatrix

      The tool hasn’t been built yet, and the aim of the hearing was to discuss how bias in policing can affect the data. If someone gets arrested for unpaid parking tickets, spends a few days in jail, is released and is arrested again a year later because of inability to pay the same fines, how’s that going to affect the data? If someone who’s homeless is arrested for trespassing on multiple occasions, how’s that going to affect the data? It depends on what types of factors are included in the statistical models.
