Civic News

Americans are divided on the use of some leading AI applications, despite an overall openness

A new report from the Pew Research Center suggests that companies have work to do in improving public acceptance of their AI-driven tech. What does that mean for Pittsburgh?

Artificial intelligence and facial recognition. (Screenshot of a scene from "Coded Bias" via YouTube)

Pittsburgh’s tech industry is built on a foundation of artificial intelligence. But a new report from the Pew Research Center found that the general population’s openness to AI technologies is balanced by some notable concerns.

Last week, the nonpartisan "fact tank," as the center calls itself, published a 164-page report detailing the results of surveys of more than 10,000 Americans conducted in the first week of November last year. The surveys focused on understanding participants' views of six significant areas of AI development: facial recognition technology, algorithms used by social media companies, autonomous vehicles, computer chip implants for the brain, gene editing and robotic exoskeletons.

“Fundamentally, caution runs through public views of these AI and human enhancement applications, often centered around concerns about individuals’ autonomy, unintended consequences and the amount of change these developments might bring for humans and society,” a press release on the report noted.

Many respondents expressed interest and excitement about the new tech alongside those concerns: 18% of respondents said they were more excited than concerned about AI, and 45% said they were equally excited and concerned. But the report's findings underscore a number of questions around AI applications that leading tech companies will have to address.


Autonomous vehicles, in particular, stood out for the level of concern expressed by survey respondents. Americans were more likely to say that widespread use of autonomous vehicles would be bad rather than good for society: 44% of respondents said it would be a bad idea, 29% said they weren't sure and only 26% said it would be a good idea.


That’s bad news for several Pittsburgh-based autonomous vehicle companies, like Aurora and Argo AI, that plan to launch or already have launched pilot programs without human drivers in the car. These latest findings also don’t align with the more positive reviews shared in reports like one from Motional last year.

Soon, many of those companies hope to make autonomous vehicles commercially available for ride hailing, deliveries and other services. To prepare for that, several have released public updates to their safety protocols in an effort to keep the public in the loop about how exactly this disruptive tech will integrate into society. But even if local governments give them the OK to do that, the findings of the latest Pew report suggest that may not be enough to generate the revenue many companies currently expect.

Overview of findings from Pew Research Center’s recent AI report. (Screenshot)

The other findings from Pew’s report don’t yet have direct ties to Pittsburgh’s main industries — but there’s potential for that to change. Local life sciences startups like Krystal Biotech and Peptilogics use artificial intelligence for rare disease gene therapy and peptide therapeutic discovery, respectively. And companies like Humotech and RE2 Robotics have put efforts toward the development of robotic exoskeletons.

Of the three human enhancement applications the Pew report considered, these two are the most publicly accepted so far, with 30% of respondents feeling that gene editing is a good idea for society and 33% feeling that way about robotic exoskeletons.

Local government has also used facial recognition technology and algorithms for a variety of programs, including within the city’s police department. While those initiatives have since been suspended or regulated, there remain hopes to mitigate their pitfalls and use them for their benefits in the future. Still, the public has yet to catch up with that hope: Only 46% of respondents said that facial recognition tech used by police would be good for society, and only 38% said social media companies’ use of algorithms to find misinformation would be good for society.

Interestingly, one of the trends that seemed to drive response patterns in the report was awareness. For autonomous vehicles specifically, those who had previously heard of the technology were more likely to see it as a good idea for society than those who had not. In a reversal of that pattern, those who were aware of facial recognition tech use by police departments were more likely to see it as a bad idea for society than those who weren’t.

Age and education level similarly influenced responses. Younger and more educated participants leaned toward thinking that misinformation-detecting algorithms and autonomous vehicles were good ideas for society, while facial recognition tech for police departments was a bad one. Conversely, older and less educated participants were more likely to see facial recognition use as a good idea, and the other two applications as bad ones for society.

More than anything, the findings of this report show that while companies may be ready to bring AI-driven products and services to market, the public is not yet ready to adopt them. Similarly to disruptive tech of past decades, the onus will be on researchers, corporations and governments to actively demonstrate productive uses of artificial intelligence in the 21st century.


Sophie Burkholder is a 2021-2022 corps member for Report for America, an initiative of The Groundtruth Project that pairs young journalists with local newsrooms. This position is supported by the Heinz Endowments.