(Screenshot of a scene from "Coded Bias" via YouTube)
For content strategist and inclusive design expert David Dylan Thomas, issues of bias in artificial intelligence are nothing new. His 2020 book, “Design For Cognitive Bias,” focuses on that problem and possible solutions.
New technology carries the biases of its creators. In a July conversation with Technical.ly, Thomas pointed to Amazon's experimental hiring bot, which favored male candidates, as an example of sexist bias shaping hiring practices. According to Thomas, if a hiring bot is trained on 10 years of resumes that are overwhelmingly men's, its artificial intelligence will keep selecting men for jobs, inadvertently downplaying women's resumes in the process.
“On the surface, it seems a fine way to design, but it ignores facts,” Thomas said. “A thoughtful way to design could say, ‘What if we could design the system in a way to make the bot look at women’s colleges?’ It’s very easy to design something like that to point the AI at the world that we’ve got.”
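The dynamic Thomas describes can be sketched with a toy model. This is entirely hypothetical — not Amazon's actual system, and the resumes and words below are invented — but it shows how a naive scorer trained only on past hires ends up penalizing terms, like "women's," that never appear in skewed training data.

```python
# Toy illustration of training-data bias: a naive resume scorer
# learns word weights from past hiring outcomes. If past hires were
# overwhelmingly men, words associated with women score poorly.

from collections import Counter

# Hypothetical training data: resumes of past hires (mostly men's)
# and resumes that were rejected.
past_hires = [
    "chess club captain software engineer",
    "software engineer executed projects",
    "captain of debate team software engineer",
    "software engineer chess club",
]
rejected = [
    "women's chess club captain software engineer",
    "women's college graduate software engineer",
]

def word_weights(hired, rejected):
    """Weight each word by how often it appears in hired vs. rejected resumes."""
    hired_counts = Counter(w for r in hired for w in r.split())
    rejected_counts = Counter(w for r in rejected for w in r.split())
    vocab = set(hired_counts) | set(rejected_counts)
    return {w: hired_counts[w] - rejected_counts[w] for w in vocab}

def score(resume, weights):
    """Sum the learned weights of a resume's words."""
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_weights(past_hires, rejected)
# "women's" never appears among past hires, so it drags the score down,
# even though the rest of the resume is identical.
print(score("software engineer chess club", weights))
print(score("women's chess club software engineer", weights))
```

Nothing in the scorer mentions gender explicitly; the skew is inherited entirely from the historical data, which is exactly why the bias is easy to miss "on the surface."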
Artificial intelligence is often mythologized in media: in the animated science fiction show "The Jetsons," the titular family has a robot maid, Rosie, who does their cleaning and housekeeping. But our reality may be closer to the dystopian vignettes of the British sci-fi anthology "Black Mirror." So which vision is nearer the truth?
"Coded Bias," a documentary about bias in artificial intelligence that is now available to stream on Netflix, works to answer that question and provide context around a topic that affects the way many people around the world live. Here are four things we learned about AI from the doc.
AI can reflect existing race and gender bias.
As part of an assignment, Massachusetts Institute of Technology Media Lab Ph.D. candidate Joy Buolamwini, a Black woman, built a tool designed to help her see herself in images of people that inspired her. When she focused the computer camera on her face, the tool was not effective. But when she wore a white mask, the tool worked extremely well.
“We teach machines to see by providing examples,” she said. Of the successful models, “the majority were men and lighter skin individuals.”
The experience showed Buolamwini how machine bias can amplify sexism, racism and colorism.
Thomas believes that people of color can be better represented in AI systems if more people of color program the systems.
“AI systems inherit the biases of the people who program them,” he told Technical.ly this week. “If those people are people of color, the AI is less likely to inherit biases that poorly represent or negatively impact people of color.”
Over 117 million people in the U.S. have their pictures in a facial recognition system.
Citing a Georgetown Law Center on Privacy and Technology report, Buolamwini said that more than 117 million people in America have their pictures in a facial recognition system. The FBI has used its own facial recognition system since 2011, and state and city police around the country run their own, largely unregulated systems.
State officials' use of facial recognition to monitor citizens evokes a police state and presents perils that report co-author and Georgetown Law Center on Privacy and Technology Executive Director Alvaro Bedoya called "uncharted and frankly dangerous territory."
“By using face recognition to scan the faces on 26 states’ driver’s license and ID photos, police and the FBI have basically enrolled half of all adults in a massive virtual line-up,” he said. “This has never been done for fingerprints or DNA.”
Fewer than 12% of AI researchers are women.
Just as gender and race disparities exist in tech overall, disparities in artificial intelligence research can spell trouble for users who don't resemble the designers. One moment in the documentary showed how algorithms can determine whether people get credit cards, housing — and, in one case, who gets hired at one of the world's most famous companies: Amazon.
As Thomas described, the online merchant made headlines in 2018 when it had to abandon a recruiting engine that showed an affinity for male job applicants. Resumes indicating that a candidate had attended a women's college or participated in a women's club or sport were downgraded, while male candidates made it further in the hiring process.
Because AI often reflects the biases of its creators, the fact that fewer than 12% of AI researchers are women presents problems that greater diversity among AI researchers and designers could help fix.
Nearly 4 million Americans lost their homes in 2008 because of AI.
Black Americans experienced a significant setback in 2008 from which many are still reeling.
In one surreal scene, "Weapons of Math Destruction" author Cathy O'Neil discussed her time working at a hedge fund starting in 2006. After quickly learning that the work centered on making money at any cost, she came to a stunning realization: the economic collapse that followed two years later, and its resulting damage, were driven by algorithms. Black Americans were disproportionately impacted.
“We saw the largest elimination of Black wealth just like that,” she said.
Michael Butler is a 2020-2021 corps member for Report for America, an initiative of The Groundtruth Project that pairs young journalists with local newsrooms. This position is supported by the Lenfest Institute for Journalism.