
What Enron’s emails tell us about artificial intelligence

Brooklyn technoartists Sam Lavigne and Tega Brain want to send you every internal email from the Enron Corporation. It's a lot of emails.

Pick your poison. (Screenshot)

Did you know that many of the artificially intelligent things we use in our everyday lives “learned” how to “think,” to varying degrees, by studying the emails of some of the most craven and degraded capitalists in our deeply weird corporate history?
Brooklyn’s Sam Lavigne and Tega Brain have a new piece of internet art out called The Good Life (Enron Simulator). It’s a very simple piece. We first told you about it back in August, right after it won a Rhizome Net Art Microgrant. You enter your email address into a very Windows 95-looking website, and the site sends you each of the 500,000 publicly available emails from the Enron archives in the order they were sent. You can choose to receive these emails over the course of seven days, 30 days, one year or seven years. Depending on your choice, you’ll receive somewhere between 196 and roughly 71,000 emails per day.
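(If you want to check the math yourself, those per-day figures fall out of simple division. A quick back-of-the-envelope sketch in Python, using the round 500,000 figure:)

TOTAL_EMAILS = 500_000  # approximate size of the public Enron corpus

# The four delivery schedules the site offers, in days.
schedules = {"7 days": 7, "30 days": 30, "1 year": 365, "7 years": 7 * 365}

for label, days in schedules.items():
    print(f"{label}: about {TOTAL_EMAILS / days:,.0f} emails per day")

# 7 days:  about 71,429 emails per day
# 7 years: about 196 emails per day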
The emails were released into the public domain in 2003 by the Federal Energy Regulatory Commission (FERC), two years after Enron’s colossal collapse, bankruptcy and subsequent criminal investigations. The half-million-email database was for years among the largest online collections of real people interacting with each other, and so engineers and computer scientists used it as a linguistic resource for training programs to recognize natural language and to separate it from spam. The Good Life notes that, according to MIT’s Technology Review, “much of today’s software for fraud detection, counterterrorism operations, and mining workplace behavioral patterns over e-mail has been somehow touched by the [Enron] dataset.”
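(If you’re wondering what “training” on a corpus like this actually looks like, here’s a minimal, purely illustrative sketch using scikit-learn’s naive Bayes classifier. The four toy emails and their labels stand in for the hundreds of thousands of real, labeled messages a production spam filter would learn from:)

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-in for a labeled email corpus like Enron's.
emails = [
    "Please review the attached gas contract before Friday",
    "Meeting moved to 3pm, conference room B",
    "WIN A FREE CRUISE!!! click now to claim your prize",
    "Cheap meds no prescription needed buy today",
]
labels = ["ham", "ham", "spam", "spam"]

# Turn raw text into word-count features, then fit a classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

# The model now reflects whatever patterns (and biases) its corpus contains.
test = vectorizer.transform(["free prize, click to claim"])
print(model.predict(test))  # -> ['spam']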
And here’s where the piece turns from a gag about the banality and volume of corporate fraud into something deeper.
“Machine learning systems inevitably reproduce the patterns and biases existing in the data used to train them,” Brain and Lavigne write in Rhizome. “The Enron corpus therefore reminds us that we need to be asking questions of who is represented in training datasets, what bias this produces, and how these systems then go on to be used.”
It’s a tremendously important point to consider as everything around us becomes artificially intelligent, ruled by algorithms. We wrote about bias in algorithms last year, when former Kickstarter data chief Fred Benenson coined the term “mathwashing.” Eyebeam fellow Mimi Onuoha is also doing interesting work on data and bias, and on why we ought not assume computers process the world any less subjectively than the humans who build them.

You can read more about it in Brain and Lavigne’s own words over at Rhizome. “There are many ways to enjoy the Enron corpus, but by far the most pleasurable is to read all 500,000 emails in the order they were sent,” they write.

We hope you’re feeling voracious.
