Addressing the spread of online hate speech will require more than just tech

Experts told Technical.ly most social media companies are stuck in a cycle of reacting to hate speech after atrocities, not taking steps to prevent them. Here's what can be done to mitigate the digital spread of bigotry.

This editorial article is a part of Universities Month 2023 in Technical.ly’s editorial calendar.

Hate speech isn’t a new phenomenon, nor a uniquely American problem. Yet in the 21st century, with the omnipresence of social media platforms and ever-evolving technology, hate speech spreads more easily and extremist groups committed to bigotry reach wider audiences.

According to experts, what hasn’t changed through the decades is that hate speech is often a precursor to large-scale violence. Meanwhile, large companies tend to take a reactive approach to dealing with it.

What can be done to curb the spread of online hate speech?

Acting and reacting

Fifth Tribe CEO and Georgetown University professor Khuram Zaman has been tracking hate speech for years. In his work as a digital strategist, he told Technical.ly, he’s observed a cycle: When hate crimes occur and the public is horrified, most websites crack down on users posting hateful content. The crackdown tends to be temporary, though.

“When some big incident happens in the news, like there’s a violent attack or a mass shooting, all these companies take action and make a lot of fanfare,” said the DC-based Zaman. “Then after a while, the problems are still there and things kind of go back to the way things were” — typically once an event is no longer in the news cycle.

However, Zaman has seen instances where digital forms of bigotry or xenophobia were not deemed harmful enough to warrant removal. In the aftermath of the 2017 New York City truck attack, Zaman recalled, his company built a tool called No Islamophobia, inspired by NoHomophobes.com, to track anti-Muslim sentiment on Twitter. The finding: Although the platform was vigilant about removing ISIS content, it allowed Islamophobic content to remain.

“When people were making death threats or inciting violence against Muslims, that stuff just kind of stayed there,” Zaman said. “We thought that was kind of an interesting sort of inconsistency.”

Since then, Fifth Tribe — which offers nonprofits, companies and government agencies support with digital marketing and other technical services — has collaborated with two other organizations to track individuals involved in incidents such as Jan. 6 and the Charlottesville Unite the Right rally, which resulted in one protester’s death and dozens of injuries. In the years since, they found, online Islamophobia has only gotten worse, to the point where it has spread to other countries such as India, where Muslims are a minority group.

“The kind of rhetoric that you’re seeing, it’s very alarming to the point where people are using ethnic cleansing-type of language,” Zaman said.

Unlike in previous years, when a shift in the news cycle led platforms to ease up on restrictions, Zaman observed that Elon Musk’s October 2022 purchase of Twitter further complicated matters. In the name of free speech, Musk made a public show of reinstating the likes of former President Donald Trump and prominent white nationalists such as Nick Fuentes.

“All these bad actors or extremists [and] racists, they’re all back on Twitter,” Zaman said. There’s been “a huge increase in hate speech. So the bad guys are back on some of these platforms, and then the good guys, the people that are doing research, they’re creating obstacles for them.” One such obstacle for researchers: the Twitter API, a tool academics and journalists relied on for tracking misinformation and hate speech, is no longer a free resource.
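The kind of monitoring researchers did with that free access often amounted to programmatic keyword searches against Twitter’s data endpoints. The sketch below is a minimal illustration of the approach, assuming the third-party Tweepy library and an API bearer token (now available only through paid tiers); the placeholder query terms and simple tally are hypothetical, not Fifth Tribe’s or any research group’s actual methodology.

```python
# Minimal sketch of keyword-based monitoring via the Twitter API, of the
# sort researchers used when search access was free. Illustrative only:
# the query below is a placeholder, not any group's real watchlist.
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # hypothetical credential; a paid tier is now required

client = tweepy.Client(bearer_token=BEARER_TOKEN)

# A hypothetical watchlist query: phrases of interest, retweets excluded.
query = '("example phrase one" OR "example phrase two") -is:retweet lang:en'

# Pull the most recent matches (the recent-search endpoint covers roughly 7 days).
response = client.search_recent_tweets(
    query=query,
    max_results=100,
    tweet_fields=["created_at"],
)

tweets = response.data or []
print(f"{len(tweets)} matching tweets in the recent window")
for tweet in tweets[:5]:  # print a small sample for review
    print(tweet.created_at, tweet.text[:80].replace("\n", " "))
```

In practice, researchers layered classifiers and human review on top of raw keyword matches; the paywalling of this kind of access is what Zaman describes as an obstacle.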

Meanwhile, within just one day of the company’s change in ownership, Twitter saw a 500% increase in the use of racial slurs as well as antisemitic and misogynistic language.

Hate on new media

Closer to home, Michael Miller Yoder, a postdoctoral researcher at the Collaboratory Against Hate, has observed similar trends. The Collaboratory is a joint research institute of the University of Pittsburgh and Carnegie Mellon University focused on understanding the roots of extremism and developing intervention tools to combat it. Yoder explained that although TikTok is newer, many of the issues prevalent on Facebook and other Meta platforms can be found on TikTok, too, often in unexpected ways.

Researchers on one of the Collaboratory Against Hate’s teams “were often finding that there were videos and images that would sort of look benign, but then people would attach racist messages to them,” Yoder said. They would also observe “people trying to use popular hashtags to draw attention to racist audiovisual content.”

One example occurred in 2020 with the George Floyd Challenge, in which users were encouraged to reenact the killing that led to a summer of protests. Despite having only a handful of participants, the challenge still managed to go viral. Yoder called it an example of bad actors using trending topics to slip through filters and cause harm.

Within the Collaboratory Against Hate, there is a team dedicated to designing educational interventions for parents and teachers. But Yoder said technology isn’t the only solution: It also remains important to continue addressing the kinds of inequalities that lead to online hate speech.

Due to TikTok’s young user base, instances of racist content on the platform are especially concerning. Yet Zaman said it’s not only the young who are vulnerable to hateful content and misinformation: In the absence of fact-checking, even a person who isn’t a member of a group tracked by the Southern Poverty Law Center, a legal advocacy organization that monitors hate groups throughout the country, can fall prey to conspiracy theories found online.

Which minority groups extremists choose to focus their ire on can also, regrettably, trend, Yoder said. When it’s socially acceptable to engage in anti-Black rhetoric or antisemitism, he explained, hate speech against those groups in particular is what’s most likely to be found online. When hate toward another group goes mainstream, the focus shifts.

“The target of, for example, white supremacist groups can actually change depending on time period,” Yoder said. “Some of our partners, like the Southern Poverty Law Center and the Anti-Defamation League, they’ve been noticing more anti-LGBTQ+ hate, particularly anti-trans hate, [in] a lot of extremist online spaces.”

Fighting a ‘cancer’

Yoder acknowledged that because many factors, both offline and online, contribute to online hate speech, combating it will take the combined efforts of companies, nonprofits and universities. Both he and Zaman observed that content moderation can go a long way on these platforms.

“The sort of default policy toolkit right now is content moderation, and that’s often just removing content and removing users,” Yoder said. “It’s not perfect, but it can be effective.”

Similar to the problem of misinformation, both experts noted that online hate speech and bigotry need to be vigilantly addressed no matter who is in office or what events make the news; otherwise, they will continue to spread, with dire consequences.

“[This] is more global than we think it is, and it’s here to stay. White supremacists have gotten emboldened since Charlottesville,” Zaman said. “And it’s just a problem that’s continuing to be like a cancer in our society. It’s continuing to metastasize, and we ignore [it] at our own peril.”

Atiya Irvin-Mitchell is a 2022-2024 corps member for Report for America, an initiative of The Groundtruth Project that pairs young journalists with local newsrooms. This position is supported by the Heinz Endowments.