Despite promising case studies, there’s still a constellation of risks associated with generative AI tools. These include outstanding issues around accuracy, transparency, fairness, privacy and intellectual property infringement.
AI-generated stories have stirred up considerable controversy, not only for being poorly written but also for plagiarism and factual inaccuracies. As more search engines deploy AI summaries, there are concerns that these features give the appearance of authority by including links to sources, yet they can take facts out of context and spread misinformation31. And depending on the sources of their training data, AI models might amplify existing biases.
Although it’s easy to point fingers at chatbots like ChatGPT, the Reuters Institute6 noted that news organizations can’t easily circumvent this problem. Developing proprietary models in-house is challenging: even the largest newsrooms might not have an archive large enough to supply all the training data that an LLM might need.
The best solution would be to fine-tune or prompt-tune existing models, but those methods can come with their own problems around safety, stability and interpretability.
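To make the idea concrete, the sketch below shows one way prompt-tuning an existing open model can look in practice, using the open-source Hugging Face peft library. The base model, initialization prompt and parameter values are illustrative assumptions for this sketch, not a workflow drawn from any newsroom described here.

```python
# A minimal sketch of prompt-tuning an existing causal language model with the
# Hugging Face `peft` library. The base model, initialization prompt and
# hyperparameters are illustrative placeholders, not newsroom practice.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_model = "gpt2"  # placeholder: any openly licensed causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Prompt tuning trains a small set of "soft prompt" embeddings while the base
# model's weights stay frozen, which keeps compute and data requirements far
# below those of training a proprietary model from scratch.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Summarize this article accurately and cite sources:",
    num_virtual_tokens=16,
    tokenizer_name_or_path=base_model,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # typically a tiny fraction of the base model
```

Even with such lightweight adaptation, the concerns about safety, stability and interpretability noted above still apply, because the behaviour of the underlying model is largely inherited.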
Despite the impressive feats generative AI can perform, these systems ultimately lack a coherent understanding of the world32. As a result, they cannot vet the quality of sources and can sometimes be tricked. For example, Wired33 found that AI products from Google, Microsoft and Perplexity have been surfacing answers based on widely debunked race science, because high-quality information on those topics is scarce on the web. On top of that, AI models can hallucinate, and they have yet to reliably convey uncertainty.
Previously, publications released their data and code alongside work produced using machine learning or AI. Now, there’s even greater demand for algorithmic accountability and explainability: audiences want to know when content is produced by AI34. Even so, some early studies have shown that audiences tend to trust news content less when it’s marked as AI-generated.
Journalism relies on a relationship between the writer and the reader, and maintaining trust is paramount. As AI is used increasingly across different stages of news production, media companies are trying to be as transparent as possible in their disclosures.
In guidance published in May 2024, editors at The New York Times35 said that generative AI will be used as a tool in service of the paper’s mission to uncover the truth and help more people understand the world. The technology is used with human guidance and review, and editors explain how the work was created and the steps taken to mitigate risk, bias and inaccuracy.
“The relationship between a journalist and AI is not unlike the process of developing sources or cultivating fixers,” as Columbia Journalism Review36 put it. “As with human sources, artificial intelligences may be knowledgeable, but they are not free of subjectivity in their design—they also need to be contextualized and qualified.”
There is a trend toward more transparency in AI systems across various industries. However, companies are still negotiating the trade-offs between open-sourcing their code and maintaining security.