Google Street View has been adding features for perusing street art and finding out how cities used to look. But some bigger issues are still being overlooked.
Take accessibility. It’s been acknowledged as a civil rights issue. The Americans with Disabilities Act (which just turned 25) lays out many guidelines that require public places to be accessible, but progress remains uneven.
Quantifying those roadblocks, however, is a massive undertaking, especially on city streets, where curb cuts, wheelchair-accessible sidewalks and other features are required at every intersection. And most navigation systems, including Google Maps, aren’t programmed to route around problem sidewalks.
“This is partly an infrastructure problem, but it’s also an information problem,” Kotaro Hara told a Transportation Techies meetup that coincided with the Association of Commuter Transportation conference being held in Baltimore this week. “We don’t know where sidewalk problems are.”
Efforts to map those problems do exist. But upon digging into them, the University of Maryland Ph.D. student discovered imperfections.
Some relied on machine learning, which proved inaccurate. Others relied on crowdsourcing, sending people out to verify the sidewalks in person.
That’s where Google Street View comes in. Hara found the tool can be used to collect sidewalk data with computer vision. The method still requires human verification, but it is less labor-intensive because the work can be done remotely and farmed out through paid micro-task platforms like Amazon Mechanical Turk.
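As a rough illustration of the remote part of that workflow, here is a minimal sketch that pulls a single Street View frame using Google’s Street View Static API. The endpoint and parameters are real, but the coordinates, file name and camera settings are illustrative assumptions, not details from Hara’s work.

```python
# Minimal sketch: fetch one Street View frame for remote labeling.
# Requires the `requests` package and a valid Street View Static API key.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; supply your own key
STREET_VIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def fetch_scene(lat: float, lng: float, heading: int = 0) -> bytes:
    """Download a single Street View image near an intersection."""
    params = {
        "size": "640x640",            # image dimensions in pixels
        "location": f"{lat},{lng}",   # where to look
        "heading": heading,           # camera direction, in degrees
        "pitch": -10,                 # tilt slightly down toward the sidewalk
        "key": API_KEY,
    }
    resp = requests.get(STREET_VIEW_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.content  # JPEG bytes, ready to show to a remote labeler

# Illustrative example: an intersection in Washington, D.C.
with open("scene.jpg", "wb") as f:
    f.write(fetch_scene(38.8977, -77.0365, heading=90))
```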
But that approach still produced a lot of false labels because of Google Street View’s inconsistent imagery. So Hara set out to create a system that delivered more accuracy with less human time involved. The result is Tohme, about which Hara wrote an academic paper.
The system gathers data from maps, then uses an algorithm to allocate the work into separate pipelines. If the curb cuts in a Google Street View scene can be easily detected by computer vision, the work never gets passed on to a human. If not, a human weighs in.
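In code, that allocation step might look something like the sketch below: a detector score decides whether a scene’s labels are kept automatically or queued for human review. The Detection class, the 0.9 threshold and the queue names are hypothetical stand-ins, a sketch of the idea rather than Tohme’s actual implementation.

```python
# Minimal sketch of confidence-based task allocation: easy scenes keep
# their computer-vision labels, hard scenes go to a human verifier.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff for trusting the detector

@dataclass
class Detection:
    scene_id: str
    label: str          # e.g. "curb_cut"
    confidence: float   # detector score between 0.0 and 1.0

@dataclass
class TaskAllocator:
    auto_accepted: list = field(default_factory=list)
    needs_human: list = field(default_factory=list)

    def route(self, det: Detection) -> None:
        if det.confidence >= CONFIDENCE_THRESHOLD:
            # Easy scene: the computer-vision label stands on its own.
            self.auto_accepted.append(det)
        else:
            # Hard scene: queue it for a human (e.g. a Mechanical Turk task).
            self.needs_human.append(det)

allocator = TaskAllocator()
for det in (Detection("pano_001", "curb_cut", 0.97),
            Detection("pano_002", "curb_cut", 0.41)):
    allocator.route(det)

print(f"{len(allocator.auto_accepted)} auto-labeled, "
      f"{len(allocator.needs_human)} sent for human verification")
```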
Hara and his team tested the system using images from Washington, D.C., Baltimore, Los Angeles and Saskatoon, Saskatchewan, Canada.
The accuracy was about the same as a purely human-based labeling approach, at about 85 percent.
“There are some false positives but it can be fixed pretty easily by humans,” he said.
Meanwhile, the amount of human time required was down by 13 percent.
“This is a huge deal because we are looking at hundreds and thousands of sidewalks,” Hara said.