In the near darkness of a 19th-century lecture hall, during the week right before Halloween, a mechanical figure of gargantuan proportions appeared before a few dozen people huddled inside the Wagner Free Institute of Science in North Philly.
As the audience stared into the belly of this beast, they were informed of its more terrifying capabilities: It could not only eat grain, but also defecate.
The image we were staring at was a large-scale, projected picture of the Digesting Duck, a mechanical contraption constructed by Jacques de Vaucanson in 1739. While the Frenchman’s 18th-century duck was actually incapable of digesting grain under its own power, Vaucanson hoped that a future duck automaton — what we today call a robot — would be able to ingest, digest and defecate grain of its own accord.
The Digesting Duck was one image that kicked off a conversation called “Dancing with Droids,” a 21st-century exploration of humans’ perceptions — good or evil — of robots.
As self-driving cars creep closer and Stephen Hawking warns us that artificial intelligence could doom humankind, University of Pennsylvania historian Adelheid Voskuhl and Drexel University engineer Youngmoo Kim came together for a jaunty lecture on why robots conjure up such wariness — as well as how robots might help humans better understand ourselves.
“Often, but not always, robots are perceived as scary, threatening, or disturbing,” Voskuhl said.
Indeed, animating much of the fear of robots, perceived or actual, is the worry over what a mechanical system that can think for itself might do to the physical world. What happens when IBM’s Watson grows tired of answering Alex Trebek’s questions and instead launches an ICBM at an unsuspecting ally?
Such fears are the stuff of hyperbole, but these hyperbolic fears have existed for centuries.
In charting a course through the history of automatons and robots, Voskuhl noted how previous generations, in the U.S. and Europe, have worried that robotic beings would take our jobs and threaten our livelihoods. Yet advancements in robotic technology, she noted, have prompted humans since the Enlightenment — when early, music-playing automatons were developed by father-and-son teams of artisans in Switzerland — to examine more closely what it means to be goal-driven, rational beings whose existence extends beyond mere physiology.
“Robots provoke in ourselves a desire to prove that we are human and not merely machines,” she said.
It’s a point Kim picked up on during the second half of the lecture.
At Drexel’s ExCITe Center, where Kim and his students have experimented with a wide array of music-playing robots, any musical rendition performed by a robot involves pre-programming courtesy of humans. Even newer robots that can incorporate real-world feedback — like recognizing when they strike a wrong note on a keyboard and correcting the error — are still incapable of understanding anything about music that hasn’t been imparted by humans. Yet what we learn by watching a robot play music, Kim said, can also inform how humans approach certain tasks — in this case, the creative arts.
“We don’t know how to quantify expressive behavior,” he said. “But in trying to replicate [that in robots], we better understand humans.”
Kim ended with a dose of assuagement: Do not fear the robot uprising.
“Our robots are actually not very intelligent at all,” Kim said, referring to robots that were made to navigate an obstacle course during the DARPA Robotics Challenge last May. “Little things — opening doorknobs, going up steps — we take for granted are absolutely confounding for robotics systems right now.”