If you’re wondering why a company’s staff lacks diversity, you might want to take a look at the computers behind their hiring process.
Corporations are using technology in the hiring process to remedy historical and routine applicant discrimination, but that same technology can end up reinforcing the very discrimination it was meant to fix, said postdoctoral research associate Solon Barocas during “The Intersection of Data and Poverty,” a symposium held during Philly Tech Week 2016 presented by Comcast, organized by Community Legal Services and Philadelphia Legal Assistance, and hosted at Montgomery McCracken Walker & Rhoads in Center City. Barocas spoke on a panel, “How Big and Open Data Harms the Poor,” focused on the unintended consequences of data technology for vulnerable populations.
Companies that use machine learning and big data in their hiring process use “training data,” which is typically taken from prior and current employees. A statistical process then automatically discovers the traits that correlate to high performance among the training data and looks for those traits in the applicant pools. “For more and more companies, the hiring boss is an algorithm,” a 2012 Wall Street Journal article reads.
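That statistical process can be sketched in miniature. The snippet below is a hypothetical illustration, not any vendor's actual system: it "trains" on made-up past-employee records by measuring how strongly each trait correlates with a high-performance rating, then scores applicants by summing the learned weights of their traits. Because the performance labels here favor one trait, the learned weights inherit that preference.

```python
# Minimal sketch of correlation-based hiring (hypothetical data and traits).
def trait_weights(training_data):
    """Weight each trait by P(high performer | has trait) - P(high performer | lacks it)."""
    traits = {t for person, _ in training_data for t in person}
    def rate(xs):
        return sum(xs) / len(xs) if xs else 0.0
    weights = {}
    for trait in traits:
        with_t = [perf for person, perf in training_data if trait in person]
        without = [perf for person, perf in training_data if trait not in person]
        weights[trait] = rate(with_t) - rate(without)
    return weights

def score(applicant, weights):
    """Score an applicant as the sum of the weights of the traits they have."""
    return sum(weights.get(t, 0.0) for t in applicant)

# Hypothetical training data: (traits, was rated a high performer?).
# Past ratings here favor "ivy_league" -- the learned model inherits that.
past_employees = [
    ({"ivy_league", "referral"}, 1),
    ({"ivy_league"}, 1),
    ({"state_school", "referral"}, 0),
    ({"state_school"}, 0),
]
w = trait_weights(past_employees)
print(score({"ivy_league"}, w) > score({"state_school"}, w))  # True
```

The point of the sketch: the algorithm never sees the word "prejudice." It only sees that a trait co-occurred with good ratings in the past, and it faithfully projects that pattern onto new applicants.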
The unspoken assumption behind building this process into hiring is that hiring managers are overtly or covertly racist, misogynistic or otherwise prejudiced (after all, if they weren't, the process wouldn't be necessary). But because prior hires may have been informed by that same prejudice, the training data reflects it: these data sets aren't so much neutral analyses of employee performance as reflections of centuries of structural exploitation and disadvantage, Barocas argued.
He also cited an example in which Xerox tapped data scientists from Evolv Solutions to assist with hiring for its call centers. Evolv found that the single best predictor of employee tenure was distance from work. However, Evolv asked Xerox to drop the variable from its applicant evaluations because it was so highly correlated with race.
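Why dropping race alone isn't enough can be shown with a toy calculation. The numbers below are invented for illustration: because residential segregation ties neighborhood to race, a "neutral" variable like commute distance can carry nearly the same information as the protected attribute it stands in for.

```python
# Sketch of the proxy problem (made-up numbers): commute distance and a
# protected geographic attribute move together, so dropping the explicit
# attribute leaves most of its signal behind.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical applicants: commute distance in miles, and whether they
# live in a historically redlined neighborhood (1/0).
distance = [18, 21, 19, 4, 5, 3, 22, 4]
redlined = [1, 1, 1, 0, 0, 0, 1, 0]

r = pearson(distance, redlined)
print(round(r, 2))  # close to 1.0: distance acts as a stand-in for geography
```

With a correlation that high, a model screening on distance is, in effect, screening on neighborhood, which is why Evolv wanted the variable gone entirely rather than merely hidden.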
If you have zero employees who are women, people of color, or people with disabilities, it’s impossible to evaluate their potential performance through machine learning, Barocas noted. Additionally, if such an employee were in the training pool, it’s possible they faced a hostile work environment and were passed over for promotions in spite of high performance, further muddying the training data. Barocas argued that the idea of big data is comforting because one assumes it collects all data, but in reality, only convenient data is being collected.
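That first point, that an absent group can't be evaluated at all, is mechanical, not subtle. In the hypothetical sketch below, a model that estimates performance as the average rating of past employees sharing a trait simply has nothing to say about a trait no past employee had.

```python
# Sketch (hypothetical data): an average-rating model is silent about any
# group absent from the training pool.
def avg_performance_by_trait(training_data):
    """Map each trait to the mean performance rating of employees who have it."""
    totals = {}
    for traits, perf in training_data:
        for t in traits:
            totals.setdefault(t, []).append(perf)
    return {t: sum(v) / len(v) for t, v in totals.items()}

# A training pool containing no veterans: the model cannot rate them.
pool = [({"engineer"}, 1), ({"engineer", "manager"}, 0)]
estimates = avg_performance_by_trait(pool)
print(estimates.get("veteran"))  # None: no basis for evaluation
```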
Seemingly rational business decisions can bear a striking resemblance to overt racism. Barocas noted that just weeks ago, Bloomberg blasted Amazon for not offering its premium Amazon Prime service in certain parts of various cities. Amazon maintained that there simply wasn’t enough customer density in those places for Prime to be financially viable. However, when Prime availability is mapped, the resulting maps could easily be mistaken for historical maps of redlining.
In the Bloomberg article, Craig Berman, Amazon’s VP of Global Communications, is quoted as saying, “Demographics play no role in it. Zero.” Sounds to me like your average post-Archie Bunker type deflecting criticism by claiming not to see race.
Barocas wondered if corporations should go against their financial interests and try to correct for the lasting effects of historical injustice. If Amazon is any example, they may not, until the government forces their hand.