Artificial intelligence seems to be everywhere — in our social media feeds, Google search results, advertisements and more. But with so much AI-created content flooding our screens, how can we tell what’s real and what’s not?
For young people, learning to question how information is made (whether by humans or machines) has always been important. Now it has become essential. There’s a catch, though. Even adults admit they’re confused about how AI really works. That’s why it’s important for reporters and other trusted information providers to step in.
Helping people understand what AI is and how it’s being used around them is a new challenge journalists are taking on.
Research from Trusting News, a nonprofit organization that helps journalists build trust with their communities, found that audiences want journalists to educate them about AI. In one survey, more than 80% of participants said it would be helpful if newsrooms offered tips to help people understand AI and detect when it’s used in content creation.
Most people say they’re not sure when they’re seeing AI-generated images, videos or text, and they’re often skeptical about what to trust online. That uncertainty makes it easier for misinformation to spread.
AI literacy is becoming an essential skill, much like learning to read or use the internet. If people don’t understand how AI influences the content around them, they risk falling behind. They may also be more likely to believe false information or fail to spot deepfakes that look and sound real.
What people say they want to learn
Trusting News worked with newsrooms to ask news consumers for their thoughts about AI. In community interviews, people shared specific ideas for how journalists and newsrooms could help, including:
- Hosting workshops or creating guides that explain AI’s role in news reporting.
- Offering interactive sessions where people can ask questions and learn how AI tools really work.
- Writing in-depth articles that show when and how AI is used in journalism.
- Keeping open conversations with the public about AI’s risks and benefits.
Even news consumers who already understood AI said these educational efforts could be beneficial.
To meet this growing need, Trusting News has launched several projects focused on AI literacy. Through newsroom cohorts and innovation grants, journalists are experimenting with creative ways to teach their audiences about AI.
Ten newsrooms are currently part of an AI literacy cohort, producing explainers and tracking how audiences respond. Another five are leading innovation grant projects that make AI education fun and accessible.
For example, LINK nky, a digital and print news organization serving communities in Northern Kentucky, is partnering with local libraries and organizations to host a series of free community trainings on AI and media literacy. USA TODAY is inviting local community members to share their personal stories and local histories using AI-powered tools in a guided, web-based experience. We Talk Weekly, a video and digital news organization based in Philadelphia, is launching a community-driven AI literacy project designed to make artificial intelligence understandable and accessible.
Newsrooms are also explaining AI by sharing information about how it’s being used in local communities and how it works. The Baltimore Banner explained how an AI chatbot is helping local students learn to read. Technical.ly shared a video explaining how large language models, or LLMs, work, comparing them to “super-smart talking parrots.”
These projects aim to make AI understandable and useful, especially for younger generations who will grow up surrounded by it.
The bigger picture
AI isn’t going away, and that means developing AI literacy isn’t just about recognizing deepfakes. It’s about understanding how this technology shapes our world, from the media we consume to the jobs we’ll have in the future.
At Trusting News, we believe journalists who take the time to explain and model ethical AI use are helping create a more informed public that’s equipped to make smart decisions about technology. When communities see journalists as trustworthy, useful sources of guidance and information through these changes, both news organizations and communities can grow stronger.
So, the next time you see a photo, video or article that seems a little too perfect, pause and ask yourself: Could AI have played a role here? The journalists working on these projects want to make sure you have the tools to find out.

This story is published as part of the Baltimore News Collaborative, a project exploring the challenges and successes experienced by young people in Baltimore. The collaborative, of which Wide Angle Youth Media is a member, is supported by the Annie E. Casey Foundation. News members of the collaborative retain full editorial control.