While some newsrooms are suing tech companies for using copyrighted content to train artificial intelligence models, others have embraced these bots to serve their own readers.
The Washington Post introduced its own AI tool for users in November. Called “Ask The Post AI,” it gives readers streamlined information about the news that the legacy outlet publishes.
“Users have adapted to Google-style searches, often asking questions in a prompt instead of a simple query,” head of product and design Gitesh Gohel wrote to Technical.ly. “As generative AI and the rise of conversational formats present opportunities for us to delight and inform readers in new ways, ‘Ask The Post AI’ was designed to allow readers to explore even more of our published reporting.”
Here’s how it works in theory: If a reader types in, “Who is the president?” the bot will generate an answer based on cited articles. It’ll also list more news pieces sorted by relevance, going back to 2016. Depending on the question, the bot could generate a sentence-long answer or a short paragraph.
The launch of the Washington Post’s general bot followed the summer 2024 release of a “Climate Answers” tool, which specifically answers reader questions about climate change. That development informed building the second bot, and both AI tools were built on the same large language model, Gohel said.
“Our approach to AI products is that one size does not fit all,” he said. “‘Ask The Post AI’ expands the interface to allow readers to explore even more of our published reporting.”
The general AI bot is layered over the traditional search engine on the Post’s site, per Gohel. If there is not a certain amount of information available to pull from, the bot will not generate an answer to a user question, he explained.
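In rough pseudocode, that retrieve-then-generate flow might look something like the sketch below. The function names, relevance scores and cutoffs are illustrative assumptions, not the Post’s actual implementation.

```python
# Minimal sketch of the retrieval-then-generate pattern described above.
# All names, thresholds and data here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    url: str
    relevance: float  # score from the site's existing search layer

def search_archive(question: str) -> list[Article]:
    # Stand-in for the traditional site search the bot is layered over.
    return [
        Article("Example coverage A", "https://example.com/a", 0.92),
        Article("Example coverage B", "https://example.com/b", 0.71),
    ]

MIN_SOURCES = 2      # hypothetical cutoff: decline if too little to cite
MIN_RELEVANCE = 0.6

def ask(question: str) -> dict:
    hits = [a for a in search_archive(question) if a.relevance >= MIN_RELEVANCE]
    hits.sort(key=lambda a: a.relevance, reverse=True)
    if len(hits) < MIN_SOURCES:
        # Not enough published reporting to pull from: no AI-generated answer.
        return {"answer": None, "articles": hits}
    # In the real product an LLM would draft the summary from the cited pieces;
    # here headlines are joined to keep the sketch self-contained and runnable.
    summary = "Based on: " + "; ".join(a.headline for a in hits)
    return {"answer": summary, "articles": hits}

print(ask("Who is the president?"))
```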
The Post previously announced that this tech was being developed in partnership with Virginia Tech. Despite what the Post’s head of data and AI Sam Han told Technical.ly at that time, Gohel said the university wasn’t involved in building this specific bot, although the Post aims to implement this collaborative work into a future experiment.
If a reader enters a simple term, an AI summary will not be generated. The Post made that change after collecting feedback from users, per Gohel.
“As we collected feedback from users, we realized the summary was less useful when users were submitting queries that weren’t framed as a question or prompt,” he said.
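A simple heuristic for that behavior might look like the following sketch, which only triggers a summary when a query reads like a question or prompt rather than a bare keyword. The rule itself is a hypothetical illustration, not how the Post classifies queries.

```python
# Hypothetical gate: generate an AI summary only for question-like queries.
def looks_like_prompt(query: str) -> bool:
    q = query.strip().lower()
    question_words = ("who", "what", "when", "where", "why", "how", "which")
    return q.endswith("?") or q.startswith(question_words) or len(q.split()) >= 4

for query in ["elections", "Who is the president?", "how does the tariff plan work"]:
    mode = "AI summary + articles" if looks_like_prompt(query) else "article list only"
    print(f"{query!r} -> {mode}")
```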
AI is not a ‘bulletproof technology’
When I used the tool on Jan. 31, I asked the bot to name the president, and the result incorrectly stated, “The current president of the United States is Joe Biden.”
Also, when I asked “Who was involved in the Cold War?” it listed Vladimir Lenin, even though the Soviet Union’s founding father died decades before the conflict’s generally accepted start in the years following World War II.
The news outlet clearly states the bot is an experiment and encourages the reader to verify the information since “AI can make mistakes.”
Transparency is key to introducing new technology in journalism, explained Ben Reininga, the Nieman-Berkman Klein fellow for journalism innovation at Harvard University. AI is not a “bulletproof technology,” and it’s key to let the reader know as much.
There are valid fears about introducing AI at news outlets, and the technology’s capabilities in 10 years remain unclear, he noted. There is default skepticism when established news organizations embrace new technology, but journalists and the outlets they write for need to incorporate innovation. If news outlets didn’t embrace the internet 20 years ago, more outlets would be out of business, he said.
“I’m old enough to remember when a print publication embracing just having a website was considered the ‘End Times,’” Reininga told Technical.ly. “These technologies are coming. There is also a good impulse to try to adapt, and figure out the best ways to use AI as a responsible journalistic tool.”
Reininga did not directly comment on the Post’s AI search tool, but said that newsrooms using AI need to maintain basic tenets of journalism — for example, citing sources clearly and making good faith attempts to be accurate. The technology should also complement reporting and the work of human journalists, he said.
“If it [AI bots] can help make information more accessible, I think that could be a good use of it,” Reininga said. “It obviously just has to exist within the parameters of good responsible journalism.”