Telegraph Puts Tough Restrictions On Use Of ChatGPT

The Telegraph has officially communicated its policy regarding the use of generative AI to its staff, emphasizing a “permissive” stance for employing the technology in “back office” tasks. However, the newspaper has imposed strict restrictions on incorporating AI-generated text into articles, allowing such usage only in limited circumstances requiring approval from top editors and the legal department.

The guidelines revealed concerns within Telegraph Media Group about both the legal and editorial risks of using AI for editorial purposes, including the fear that sensitive information entered into chatbots could resurface elsewhere. In contrast to other major UK and US publishers, such as The Guardian, Financial Times, BBC, Associated Press, and Reuters, which have publicly shared their guidelines for using generative AI, The Telegraph appears to be more cautious.

The managing editors of The Telegraph circulated the policy to staff, emphasizing its broad and high-level nature, with plans for more specific guidance in the future for various business use cases. While acknowledging AI’s increasing value as a tool for the business, the editors cautioned journalists about the fundamental challenge it poses to the relationship with readers, emphasizing the need for accountability and clear attribution of content.

Expressing concerns about the undisclosed data used by generative AI companies like OpenAI to train large language models (LLMs), the Telegraph editors highlighted the risk of plagiarism arising from the models' reliance on content drawn from multiple sources. Journalists who submit copy generated by ChatGPT would face sanctions akin to those for plagiarism. The only permissible instance for publishing AI-generated copy is to illustrate a piece about AI, subject to approval by specific editors and legal clearance.

To ensure transparency, any AI-generated text or images must be clearly signaled to readers, and rights for using the content must be verified with the editorial legal team. Given the potential for generative AI to produce false information, journalists are instructed to assume such information is false by default and are held accountable for any output based on AI-generated content.

The guidelines strictly prohibit the use of generative AI tools for copy editing due to concerns about incorrect information and a lack of understanding of the Telegraph’s style. Despite these restrictions, a more pragmatic and permissive approach is adopted for “back office” activities, such as generating story ideas, suggesting headlines, and assisting in research, with employees responsible for critiquing and ensuring the coherence and relevance of AI-generated suggestions. The policy reflects a cautious approach to the opportunities and challenges posed by generative AI in journalism.
