Philadelphia Journalism Collaborative

Dance challenges, deepfakes and discrimination: The deal with AI and personal safety


An image generated using AI to represent students in Philadelphia engaging with technology

When you consider some of the more high-profile ways artificial intelligence has gone rogue, it might leave you wondering if the technology is truly ready for prime time. 

There was the time an AI chatbot professed its love for a journalist and tried to get him to leave his wife. Or when a different AI chatbot designed to help small businesses in New York City advised business owners to break the law and ignore worker protection rules. 

There are more — we’ll take a look at examples from dance challenges to deepfakes to outright discrimination — but plenty of Philly residents are actually feeling optimistic about the whole AI situation.

Somewhere between Skynet and ‘So what?’ 

The team at Love Now Media conducted interviews with Philadelphians to get a sense of where the average person stood on the issue of artificial intelligence.

Patrick, a city resident who regularly uses AI to write scripts to automate web tasks, said he thinks it’s a tool that should be explored by everyone, including government officials — but it could have a downside. 

“The extinction of the human race? You know, the probability of doom is not zero,” Patrick said jokingly. That attitude starts to seem less far-fetched when you consider that General Motors recalled 950 of its Cruise self-driving cars last November after a crash involving a pedestrian.

Do the pros outweigh the cons? “I think it’s irrelevant,” Patrick said. “It’s coming, it’s enormous. And yeah, it can be used for good or bad and it’s really hard to say.” 


Other interviewees were more optimistic. Asked whether AI is something that poses a threat, Pranav, who uses the tech to write emails and refine his resume, said no. 

“I think it depends on how you use it. I don’t think it’s specifically a threat right now,” Pranav said. “It only gives you what you asked … it’s not going to give you anything extra. So I don’t think that’s dangerous.” 

Of a dozen or so interviews, most responses were somewhere along this continuum, with a lot of “yes and no” and “it depends on how you use it.” Understandably, the responses also aligned with each interviewee’s particular connection to the topic. 

Eric is a graphic designer who is passionate about not letting AI-generated content usurp the work of human artists. (Adobe was recently called out for selling AI-generated art under the name of famed photographer Ansel Adams.)

Maisha uses AI like an assistant that helps her “do extra stuff that I don’t have to go to other people for.” However, she thinks there are “ethical lines involved” in law enforcement’s use of artificial intelligence, and would like “a little bit more like regulation around what they could possibly do with the use of AI.”

Police departments across the country have found themselves struggling with this very issue. 

Deepfakes reach new depths

Romance scams have been around far longer than any of us have been alive, and according to the Federal Trade Commission, cost consumers $1.14 billion in 2023. 

If it were up to the thousands of students I speak to regularly about digital citizenship and online safety, the surefire solution to avoid being catfished would easily amount to “Just FaceTime them, so you know they’re real.” 

But what happens when even the video call is fake? A loosely organized Nigerian crime ring known as the Yahoo Boys has reportedly taken this type of scam to new heights (or lows), using deepfake AI technology to impersonate other people over video chat. The software can overlay a digital mask on the scammer’s face that tracks facial movement and expressions in real time. In some cases, the scammer’s voice can be modified as well.

They’ve also used the tech to create fake social media accounts intended to lure minors through sextortion, which escalates the scam even further by “blackmailing individuals with the threat of publishing sexually explicit images unless certain demands are met.” 

The proliferation of readily available artificial intelligence tools that make deepfakes easy to create means anyone can become a victim of these types of romance scams. And whether you end up losing your life savings, as in this Utah case from January, or suffer far worse consequences, like teenager Jordan DeMay, who died by suicide after becoming a sextortion victim, it’s not hard to see how the misuse of this technology constitutes a real threat to personal safety.

Faulty facial detection in Detroit

Across the country, people falsely accused of crimes because of facial recognition technology were overwhelmingly people of color. Because of its potential for bias, discrimination, and wrongful identification, at least 21 jurisdictions nationwide have banned law enforcement from using the tech. But police departments in some of those places (like Austin and San Francisco, per the Washington Post) are getting around the ban by asking police in neighboring districts to run searches for them.

Detroit holds the dubious distinction of being the US city with the most false arrests due to facial recognition: three, including an 8-months-pregnant woman who was handcuffed as she was getting her children ready for school. The city has since put reforms in place to prevent errors like this from happening again.

Artwork at Quorum in University City (Danya Henninger/Technical.ly)

The Philadelphia Police Department recently came under fire for overreliance on video surveillance footage. The PPD did not pursue a contract with controversial facial recognition software provider Clearview AI after piloting it back in 2020, but current departmental policy allows trained personnel to use facial recognition in investigations.

From December 2022 to December 2023, SEPTA Police ran a pilot with ZeroEyes, a Conshohocken-based startup that uses AI for gun detection, but the transit authority chose not to extend the program — not because of privacy concerns, but because SEPTA’s camera systems were too outdated to make good use of it.

Meanwhile, the Phillies have implemented facial recognition for entry into Citizens Bank Park, and PHL Airport is using facial recognition screening at certain security checkpoints. 

Out of seven federal agencies using facial recognition technology (including the FBI and the Department of Homeland Security), only three had guidelines intended to protect civil liberties, per a March report from the U.S. Government Accountability Office. All seven started using the technology without first requiring staff to complete any training to ensure its proper use.

Even dance challenges aren’t safe

One of the latest dance challenges on TikTok is the #buckingchallenge, where creators perform a short piece of choreography to producer/DJ Jacob Dior’s mashup of the Beyoncé song “Sweet Honey Buckiin” with “Still Tippin” by Mike Jones.

The first time I saw the dance challenge on my “For You” page, I was legitimately struck — by the text on screen. TikTok’s AI-generated auto captions had mistranscribed the word “buckin” as another word that sounds similar but starts with the letter “F.” And because the snippet of the song being used repeats the word “buckin” almost exclusively, I watched f-bombs get dropped continuously across my phone screen for the length of the video.

I’ve seen variations of the challenge on my feed with those same captions in place no fewer than a hundred times now, and while that was a minor inconvenience, I couldn’t help but think about the potential impact to others, particularly young children. I also thought about the kind of people who seek out and watch videos of kids online for more nefarious purposes and how their sensibilities might be affected.

As someone who’s been speaking to students and parents about the potential dangers of social media for almost 15 years, I know the extrapolation isn’t a reach. The larger issue, though, is that not only did TikTok’s AI get this transcription wrong, it also failed to catch a word in the captions whose use by a creator could constitute a violation of the platform’s rules in the first place.

I don’t believe we’ve approached Terminator-level threat-to-humanity AI scenarios yet. However, there is no shortage of examples showing that artificial intelligence has enough potential to cause harm.

Joy, one of Love Now Media’s interview respondents, summarized the question of whether AI will benefit us or pose a risk to society this way:

“It’ll keep benefitting us the way that it does now, but if you ever let something have too much power, you never know what can happen.”
