The Guardian newspaper’s use of AI

The Guardian has always embraced technological change.

In the early years, trains, telephones and the telegraph all transformed the operation of getting news to readers. In the 20th century, we championed new possibilities, whether it was colour photography, computerised production, or bold new design.

We were pioneers of the early internet in the mid-1990s, trailblazers when it came to podcasts and liveblogs, enthusiastic trendsetters in commentary and reader interaction, award-winners in video, data journalism and new digital forms of storytelling.

So what are we now doing with artificial intelligence? Well, we have a plan to handle it responsibly, and we hope this approach might persuade you to support our journalism.

It’s already clear that generative artificial intelligence (genAI) could be a big technological leap forward: a tool that could help journalists scrutinise big datasets, summarise complex situations rapidly, and discover things of use to the public; a tool that could reduce the time needed for painstaking, manual research, freeing up journalists to spend more time investigating and joining the dots. So far, so good.

But genAI tools are as yet unreliable, often producing output that is entirely fabricated. The technology could generate a dozen or more news stories an hour, but of highly questionable quality; it could scrape the intellectual property of others and repackage it into something apparently original but ultimately exploitative.

There are worrying implications for journalism and wider society: what will happen to the quality and veracity of news and information if tech platforms integrate generative AI into functions like search engines? If people are taken in by the falsehoods these tools sometimes generate, will they bother visiting reliable news websites at all? Will the underlying biases of genAI tools and their training data in turn skew the flow of information in the public space?

For months we have been grappling with these questions. A working group of our journalists and digital experts has been considering how the Guardian responds to the risks and opportunities of the AI era.

Broadly, our approach rests on three principles. First, any use of genAI must have human oversight. The Guardian will remain a champion of journalism by people, about people, for people. GenAI tools will only be used when there is a clear and obvious case for them, and only with the express permission of a senior editor. We will be open with our readers when we do this.

Second, any use of genAI will focus on situations where it can improve the quality, not the quantity, of our work: for example, helping to interrogate vast datasets for important insights, or assisting our commercial teams in certain business processes.

Third, to avoid exploiting the intellectual property of creators, a guiding principle for the Guardian will be the degree to which genAI systems address copyright permissioning and fair reward.

Like other technologies before it, generative artificial intelligence will create risks and challenges, but this isn’t a reason to reject it out of hand. Nor can we ignore the impact it will have on society. We want to work with engineers who seek to design and build these technologies in a responsible and cautious way.

We know our readers and supporters relish the fact that the Guardian’s journalists are humans, reporting on and telling human stories.

That must never change.

Thank you,

Katharine Viner
editor-in-chief
Guardian News and Media