I always wondered what it’d be like to have an app that could translate American Sign Language in real time.
Earlier this month, I got the chance to make that idea a reality at Drexel’s Dragon Hacks hackathon. We built a setup that let a camera read hand movements in ASL, translate them into English and then text the translated words to your phone in real time. Our team, made up of students from the University of Maryland, the University of Pittsburgh, Penn State and Drexel, thought it could be a way for people who don’t understand ASL to communicate with people with hearing loss, and the judges agreed it was a good idea: our hack won first place.
Here’s a look at our process over the weekend.
Our initial idea was a hardware hack that used eye tracking via computer vision to detect whether truck drivers were falling asleep at the wheel. With an Arduino microcontroller connected, it would sound a buzzer if the app saw the driver’s eyelids closing.
Once we started playing around with the Microsoft Kinect motion sensor and saw its capabilities, however, we found sign language recognition to be a much better application. It also seemed more feasible to finish a comprehensive ASL recognition app over a single weekend. In addition, one of our team members, the University of Pittsburgh’s Solomon Astley, had a friend in high school who was deaf. He learned to sign in order to communicate with his friend and came into the hackathon already knowing some phrases in ASL.
Some of our group members were also inspired by the fact that over 5 percent of the world’s population, or about 360 million people, live with disabling hearing loss, including roughly one-third of people over 65 years of age. Helping the Deaf community was something we could get behind.
Our team ultimately paired open-source American Sign Language recognition software with Twilio’s API, sending text messages containing whatever was signed in front of the Kinect camera. Twilio lets developers send text messages to mobile phones from web apps deployed in the cloud.
Essentially, the Kinect could be mounted on a hat; it would capture what the person opposite the wearer was signing and feed it into the app, which translated it into English. The app would then push the text to a Node.js web app deployed on IBM Bluemix that integrated with Twilio’s API, so a text message containing what was signed arrived on the wearer’s phone.
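As a rough illustration of that last step, here is a minimal sketch of how a translated phrase could become a request to Twilio’s SMS endpoint. It’s in Python rather than the Node.js we actually used, and the account SID, phone numbers, and helper name are placeholders, not our real setup:

```python
# Sketch: once the recognizer produces an English phrase, the web app
# POSTs it to Twilio's Messages endpoint so it lands on the wearer's
# phone as a text. Credentials and numbers below are placeholders.
import urllib.parse

TWILIO_API = "https://api.twilio.com/2010-04-01"

def build_sms_request(account_sid, translated_phrase, to_number, from_number):
    """Return the (url, form_body) pair for Twilio's Messages endpoint."""
    url = f"{TWILIO_API}/Accounts/{account_sid}/Messages.json"
    # Twilio expects a form-encoded body with To, From, and Body fields.
    body = urllib.parse.urlencode({
        "To": to_number,
        "From": from_number,
        "Body": translated_phrase,
    })
    return url, body

# Example: the recognizer just matched the sign for "hello"
url, body = build_sms_request("ACxxxxxxxx", "hello", "+15551234567", "+15557654321")
print(url)
print(body)
```

In the real app the request would be sent with HTTP basic auth (account SID and auth token); the sketch stops at building the payload so it stays runnable without credentials.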
Towards the end, most of the team found themselves learning several phrases in sign language for the demo.
During the demo, a few people who knew ASL came up and tried signing into the camera. We had set up the app to recognize phrases, but not individual letters, so we got a little nervous when people began fingerspelling words.
Initially we had aimed for the best social impact hack award, but that prize went to an app that gamified arthritis prevention, and at that point we didn’t think we’d win anything. When our team was announced as the winning hack, we didn’t believe it at first. Most of our team members were first-time hackers who came into the weekend hoping to learn something and found themselves winning the entire hackathon instead. The last few moments of finishing our app were fraught with worry, but the win left us brimming with joy.
The winning team consisted of Solomon Astley and Alexis Aquiatan from the University of Pittsburgh, Shawn Wali from the University of Maryland, Yahya Saad from Penn State and me, Nigel Coelho, from Drexel.
Going forward, we want to see whether we can integrate the Myo armband as a second input to confirm the accuracy of certain signs. This project feels like something that can really help people communicate better.
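One way that two-input idea could work, sketched very loosely here with made-up labels and thresholds (we haven’t built this yet), is to accept a sign only when both sensors agree and both are confident:

```python
# Hypothetical sketch of the two-input confirmation idea: accept a sign
# only when the Kinect classifier and the Myo classifier agree on the
# label and both report high confidence. Threshold is illustrative.
def confirm_sign(kinect_guess, kinect_conf, myo_guess, myo_conf, threshold=0.8):
    """Return the agreed-upon sign, or None when inputs disagree or are weak."""
    if kinect_guess == myo_guess and min(kinect_conf, myo_conf) >= threshold:
        return kinect_guess
    return None

print(confirm_sign("thank you", 0.92, "thank you", 0.85))  # both agree, confident
print(confirm_sign("thank you", 0.92, "hello", 0.85))      # disagree, rejected
```

Requiring agreement trades a little responsiveness for fewer mistranslations, which matters when the output is being texted to someone mid-conversation.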