Should readers know if a news story was written by AI? The ethical challenge of modern journalism


Should a newspaper reader know that what they are reading was written with the help of artificial intelligence (AI)? Can journalists guarantee that a text generated with this technology does not contain sexist biases? These and other ethical dilemmas will become increasingly common in media newsrooms, which need to innovate while upholding journalistic principles. Indeed, the ethical dimension of the emergence of AI is one of the most important challenges facing the media, given their social commitment and their service to democracy.

AI is present in more and more areas of our lives, including the information we constantly receive from the media. Many journalists already use AI tools for all kinds of tasks, from generating text to producing images and sound.

Thanks to what is known as generative AI, professionals can write headlines, summarize and translate texts, transcribe interviews, search for information to add context to the news they write, create graphics and images, and even generate news stories automatically.

Ethical codes

Aware of the potential implications of AI, some media outlets have begun creating their own ethical codes: guidelines to help journalists use the technology responsibly. However, although half of newsrooms already use generative AI, only 20% have such codes, according to a global survey.

In our study, we located 40 of these documents. In addition to containing guidelines on the ethical use of AI in journalism, they are publicly accessible, which helps maintain user trust. They belong to a total of 84 media outlets, news agencies, magazines, media groups, and media alliances from 15 countries across Europe (including Spain), the Americas, and Asia.

What can academics contribute?

Researchers from academia can indirectly contribute to the successful implementation and use of AI in newsrooms. Proof of this is that 90% of journalists participating in a global survey believe universities should play a greater role in this process. Media outlets would thus benefit from academics' knowledge, research, collaboration, and critical view of this technology.

However, some studies show that connections between journalists and academics are often weak. In fact, as our study shows, there are academic proposals for the ethical use of AI in journalism that existing codes do not address, likely due to a lack of awareness.

Here we present a summary that combines the academic proposals with the guidelines from the professional documents examined in our study, along with some notes on how to implement them.

  1. Follow the principles of accuracy and credibility. Verify the accuracy and credibility of the information provided by AI, and make it easy for users to report errors arising from the media's use of AI.
  2. Improve accessibility. Make news stories adaptable to different platforms, improve their readability, and ensure that the style of automated texts matches that of the rest.
  3. Offer relevant content. Use AI not only to find popular and trending topics but also to make the information published meaningful to people's lives; AI-powered content personalization and recommendation services should not work against the public interest.
  4. Promote diversity. Reflect social diversity, perspectives, and points of view; avoid stereotypes and biases.
  5. Ensure transparency. Indicate when an algorithm was used to create a news piece, and let users know whether they are interacting with a human or with an AI.
  6. Ensure responsible data and privacy management. Data providers must have the legal right to supply their data to journalists, and journalists must have the right to process and publish it; collect only the personal data that is necessary; anonymize irrelevant information and store databases securely; assess whether it is worthwhile to share private and potentially competitive audience data with third parties; and allow users of conversational AI to decide what data is collected, what it is used for, where it comes from, and how it is shared.
  7. Empower human oversight. Regularly review algorithms to prevent them from producing out-of-context information, and weigh the potential negative implications of delegating editorial decisions to algorithms.
  8. Build interdisciplinary teams. Form teams that combine technical knowledge with ethical principles, and that investigate how AI can advance the principles governing journalism.

These eight recommendations can be useful for media outlets looking to create their own codes of ethics or improve existing ones.

Social function of journalism

As media outlets and their professionals continue to take steps toward integrating AI tools into their journalistic work, they will increasingly face ethical dilemmas that are not always easy to resolve.

Sooner or later, all media outlets will need to take steps to guide journalists in the responsible use of AI without losing sight of the social function that good journalism serves. And in this task, researchers have much to contribute.

Author Bio: Sonia Parratt Fernández is Professor of Journalism at Complutense University of Madrid
