How this UMD researcher is using Google Street View to make the physical world more accessible - Technical.ly Baltimore


Jul. 30, 2015 10:05 am


Kotaro Hara has a system that points out what sidewalks aren't good for wheelchairs. It relies on both computer vision and human verification (via Amazon Mechanical Turk).

Tohme is a system for identifying accessibility challenges that need fixing.

(Screenshot via YouTube)

Google Street View has been adding features for perusing street art and finding out how cities used to look. But some bigger issues are still being overlooked.

Take accessibility. It’s been acknowledged as a civil rights issue. The Americans with Disabilities Act (which just turned 25) lays out many guidelines that require public places to be accessible, but progress remains uneven.

Quantifying those roadblocks, however, is a massive undertaking — especially when it comes to city streets, where curb cuts, sidewalks that can accommodate a wheelchair and other features are required at every intersection. And when it comes to navigation systems like Google Maps, most aren’t programmed to avoid problem sidewalks.


Kotaro Hara. (Courtesy photo)

“This is partly an infrastructure problem, but it’s also an information problem,” Kotaro Hara told a Transportation Techies meetup that coincided with the Association of Commuter Transportation conference being held in Baltimore this week. “We don’t know where sidewalk problems are.”


But upon digging into existing efforts to map those problems, the University of Maryland Ph.D. student discovered imperfections.

Some relied on machine learning, which proved inaccurate on its own. Others relied on crowdsourcing, sending people out to verify sidewalk conditions in person.

That’s where Google Street View comes in. Hara found the imagery could be mined for accessibility data using computer vision. The method still requires human verification, but it becomes less labor intensive because the work can be done remotely, with verifiers paid through task-oriented crowdsourcing platforms like Amazon Mechanical Turk.

But that approach still produced a lot of false information because of Google Street View’s inconsistency. So Hara set out to build a system that achieved higher accuracy with less human time involved. The result is Tohme, about which Hara wrote an academic paper.

The system gathers data from maps. Then, it uses an algorithm to allocate the work into separate pipelines. If curb cuts in a Google Street View scene can be easily detected by computer vision, the work never gets passed on to a human. If it’s not so easy, then a human weighs in.
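The allocation step described above can be sketched in a few lines of Python. This is an illustrative simplification, not code from the Tohme paper: the field names and the fixed confidence threshold are assumptions, and the real system uses a learned model to decide which scenes to route to workers.

```python
# Hypothetical sketch of Tohme-style task allocation (names illustrative).
# Scenes the curb-cut detector handles confidently skip human review;
# low-confidence scenes are queued for crowd workers (e.g. Mechanical Turk).

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; the actual system learns this boundary


def allocate(scenes):
    """Split detected curb-cut candidates into automatic and human pipelines."""
    automatic, needs_human = [], []
    for scene in scenes:
        if scene["cv_confidence"] >= CONFIDENCE_THRESHOLD:
            automatic.append(scene)    # trust the computer-vision label as-is
        else:
            needs_human.append(scene)  # send to crowd verification
    return automatic, needs_human


scenes = [
    {"id": "dc_001", "cv_confidence": 0.97},   # easy: clear curb cut detection
    {"id": "balt_042", "cv_confidence": 0.55}, # hard: occluded or ambiguous scene
]
automatic, needs_human = allocate(scenes)
```

The payoff of this design is that human effort, the expensive resource, is spent only on the scenes where computer vision is unreliable.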

Hara and his team tested the system using images from Washington, D.C., Baltimore, Los Angeles and Saskatoon, Saskatchewan, Canada.

The accuracy was about the same as a purely human-based labeling approach, at about 85 percent.

“There are some false positives but it can be fixed pretty easily by humans,” he said.

Meanwhile, the amount of time put in by humans was down by 13 percent.

“This is a huge deal because we are looking at hundreds of thousands of sidewalks,” Hara said.

-30-
Stephen Babcock

Stephen Babcock is Market Editor for Technical.ly Baltimore and Technical.ly DC. A graduate of Northeastern University, he moved to Baltimore following stints in New Orleans and Rio Arriba County, New Mexico. His work has appeared in The New York Times, Baltimore Fishbowl, NOLA Defender, NOLA.com/The Times-Picayune and the Rio Grande Sun.
