
How to spot misinformation and bots on social media in the age of generative AI

Leading up to Election Day, you can’t believe what you read, see or hear on social media — and trolls get more advanced each day.

Social media sites like X can be used to spread election misinformation (AP Photo/Darko Vojinovic, File)

Bots are everywhere this election season, and identifying them is harder than ever.

Once you know what to look for, though, you’ll start to notice bot activity everywhere. Drop in on a heated political thread on social media and you might see the signs: different accounts repeating similar phrases over and over, or a mass false “debunking” of anything that might hurt whichever candidate they’re defending.

What you won’t see much, however, are the telltale signs that an account is a troll bot. 

Back in 2016, when foreign accounts posing as Americans on Twitter were attacking Hillary Clinton, they were often brand-new accounts with no followers, no photo and handles made up of long strings of digits. They posted memes but couldn’t interact with human users, unless they were actually paid humans following scripted arguments.

People were far less bot-savvy in the 2010s, and the bots were less sophisticated.

As the 2024 election quickly approaches, election disinformation has become so ubiquitous that it’s impossible to stop. You may think you can easily recognize a piece of AI-generated propaganda, but the tech is continuously becoming more human-like, and it’s more accessible than ever, making this election cycle a first-of-its-kind test.

AI fakes political endorsements and spreads dangerous rumors

Most regular social media users know a 2010s-style bot when they see it, and they ignore or block it.

Bots today, though, are much harder to recognize. They often look real, with followers, profile pictures and good grammar. More importantly, they can respond to people, are trained to use specific talking points and don’t give up when challenged.

Social media disinformation, online political polarization and the use of social media to organize real-world violence all harm democracy, according to research by Lance Y. Hunter, a professor of international relations at Augusta University. Hunter studies political behavior at the intersection of technology, artificial intelligence, social media and cybersecurity.

“Social media can lead to much more polarized environments, where there’s more anger and hostility between different groups and individuals with different political ideologies,” Hunter told Technical.ly. “Sometimes this can lead to more political violence.”

Since 2016, the concern over foreign digital warfare has expanded to include a potentially equally dangerous domestic disinformation threat, now fueled by generative AI.

Such campaigns include spreading racist disinformation about immigrants eating household pets and faking celebrity campaign endorsements. AI now spreads disinformation across social media platforms daily, potentially impacting elections in some 50 countries this year.

Bots appear more human as tech advances

Every day, new AI images flood social media. Many are identifiable as fakes, but as generative AI tools rapidly evolve, others are becoming harder to spot, especially for the average user.

Some of the “tells” of AI-generated images, such as garbled text, wonky hands and a general uncanny-valley feel, are no longer an issue with some current image generators.

Even more convincing is generative AI’s ability to replicate a human voice.

“During the presidential primary in New Hampshire a few months back, there was actually an AI-generated piece of audio that was mimicking Joe Biden telling voters in New Hampshire not to go to the polls,” Hunter said. “This would have fooled us.”

Still, the most powerful driver of disinformation in 2024 is text, and the combination of generative AI and bots is especially potent. 

Just a few years ago, an entity running a bot campaign would have had to rely on humans to write convincing posts and respond to other users. Now, bots powered by generative AI can produce human-like posts, sometimes spanning paragraphs, in virtually no time.

Creating disinformation bots just got easier, too

Just a couple of years ago, creating bots that could spread disinformation required technical skills, like the ability to code.

But bot campaigns have become less and less difficult to create, according to Antoine Vastel, vice president of research for DataDome, a New York City-based software company that specializes in bot management software.

“It’s easier to make it look like the message comes from [a] human at scale,” Vastel said. “And it’s actually not that expensive.” Bot platforms that let you build bots without any special skills can cost as little as $20 a month.

Combine that kind of accessibility with a social media platform like X, formerly Twitter, that has recalibrated to allow for the most extreme speech, including disinformation, bigotry and calls for violence, and you have potentially enough instability for disinformation to come out on top. 

While X is the most notorious for allowing extreme speech since billionaire Elon Musk purchased Twitter in October 2022, other, more moderated social media platforms like TikTok, Facebook, Instagram and Reddit are also conduits of disinformation and bot activity.

“One of the things I think is important for the average person to do is, if they see something that they’re [suspicious] about and they’re not sure about, just give it time to be vetted,” Vastel said. “Give the experts time to determine if it’s misinformation or not.”

If you’ve ever stepped into a particularly heated thread on X, that can be a challenge. Everybody, bot and human alike, presents as an expert. Accounts declare claims “debunked,” with nothing to back it up, whenever those claims might hurt their side, and multiple accounts repeat the same phrases over and over until they saturate the ether.

And it’s not just politics. Such tactics have been used to attack celebrities and royals and to defend criminals: after the mugshot of a young man who killed two people in a street-racing crash went viral on TikTok, a bot-boosted campaign pushed to free him from his prison sentence.

Bot ID tools are less effective as AI evolves

For a long time, online tools like Bot Sentinel were used to help identify suspicious accounts. I’ve spent weeks feeding X usernames into such tools in an attempt to track bots or, at least, problem accounts.

Even the most disruptive accounts come up as normal, something that was not the case a few years ago. X began banning these tools from the platform in 2022; regardless, the detection parameters they relied on even two years ago are useless now. The bots have outsmarted them, at least for now.

There are still “tells,” but you have to look at the behavior of potential bots collectively rather than individually, starting with the repetition of phrases across many accounts, as in the sketch below.
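To make that collective approach concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the handles, the posts and the `flag_repeated_phrases` helper are hypothetical, and real detection systems rely on fuzzier text matching and far more signals than exact collisions.

```python
from collections import defaultdict
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so near-identical posts collide."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def flag_repeated_phrases(posts, min_accounts=3):
    """Flag phrases posted near-verbatim by several distinct accounts.

    posts: iterable of (account, text) pairs.
    Returns {phrase: accounts} for any phrase pushed by at least
    `min_accounts` different accounts -- one collective "tell".
    """
    accounts_by_phrase = defaultdict(set)
    for account, text in posts:
        accounts_by_phrase[normalize(text)].add(account)
    return {p: a for p, a in accounts_by_phrase.items() if len(a) >= min_accounts}

# Invented data: three "different" users pushing the same line.
posts = [
    ("@user_a", "That story was debunked weeks ago."),
    ("@user_b", "That story was DEBUNKED weeks ago!!"),
    ("@user_c", "that story was debunked weeks ago"),
    ("@user_d", "Here is the county's official vote tally."),
]
print(flag_repeated_phrases(posts))  # flags the repeated "debunked" line
```

No single account in that toy example looks suspicious on its own; the signal only appears when the accounts are viewed together.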

Geographic information and timestamps can also be useful for identifying where posts originated, though, again, bot attacks can be both domestic and international.
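In the same spirit, here is a small, hypothetical sketch of the timestamp angle: a narrow band of active posting hours can hint at an operator’s working timezone, and tightly synchronized posting suggests scripting. Neither is proof on its own, and the data below is synthetic.

```python
from collections import Counter
from datetime import datetime, timezone

def posting_hour_profile(timestamps):
    """Histogram of posts per UTC hour; a botnet run from one place
    often shows a narrow, consistent band of active hours."""
    return Counter(datetime.fromtimestamp(t, tz=timezone.utc).hour for t in timestamps)

def burstiness(timestamps, window_seconds=60):
    """Fraction of consecutive posts landing within `window_seconds` of
    each other; values near 1.0 suggest scripted, synchronized posting."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return 0.0
    close = sum(1 for a, b in zip(ts, ts[1:]) if b - a <= window_seconds)
    return close / (len(ts) - 1)

# Synthetic burst: 50 posts, one second apart.
ts = [1_700_000_000 + i for i in range(50)]
print(posting_hour_profile(ts).most_common(1))
print(burstiness(ts))  # 1.0 for this scripted-looking burst
```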

As generative AI, especially the visual kind, becomes harder to identify as AI, further transparency will be needed, Hunter said.

“I think it’s really important going forward to have watermarks required on AI-generated images and videos,” he said. That requires a lot of cooperation from companies, social media platforms and, potentially, the law.
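Robust watermarking is an open engineering problem: production schemes such as Google DeepMind’s SynthID embed statistical signals in the pixels themselves, designed to survive common edits. As a much weaker toy illustration of the labeling idea, the sketch below checks an image for self-declared provenance metadata using Pillow; the key names are hypothetical, and this kind of metadata is trivially stripped, which is exactly why mandated, robust watermarks would need that cooperation to work.

```python
from PIL import Image  # pip install Pillow

# Hypothetical marker keys; there is no single standard metadata field yet.
PROVENANCE_KEYS = {"c2pa", "provenance", "ai_generated"}

def declared_provenance(path):
    """Return metadata entries in which an image declares its own origin.

    This only catches cooperative labeling: format-level metadata survives
    neither screenshots nor re-encoding, unlike pixel-level watermarks."""
    info = Image.open(path).info  # PNG text chunks and similar metadata
    return {k: v for k, v in info.items() if k.lower() in PROVENANCE_KEYS}

# Hypothetical usage: print(declared_provenance("suspect_image.png"))
```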

Plus, there’s more to generative AI-powered bots than spreading disinformation, Vastel said. They can drain resources, commit credential-stuffing attacks and access user information. And we can’t rely on social media platforms to protect us.

“Twitter is not even able to stop bots posting obvious crypto scams,” Vastel said. “How can we expect them to stop more sophisticated bots that are trying to remain under the radar?”
