BBC threatens legal action against Perplexity for unauthorized content use

The BBC has threatened legal action against US-based AI company Perplexity, accusing the firm of reproducing BBC content “verbatim” without permission through its chatbot. This marks the first time the world’s largest public broadcaster has taken such action against an AI company, highlighting escalating tensions between media organizations and AI firms over unauthorized content use.

What you should know: The BBC sent a formal legal letter to Perplexity CEO Aravind Srinivas demanding immediate cessation of BBC content use, deletion of stored material, and financial compensation.

  • The letter states this “constitutes copyright infringement in the UK and breach of the BBC’s terms of use.”
  • BBC research published earlier this year found that four popular AI chatbots, including Perplexity AI, were inaccurately summarizing news stories, including BBC content.
  • The corporation argues such misrepresentation damages its reputation with audiences, including UK license fee payers who fund the BBC.

The bigger picture: This legal threat emerges amid growing scrutiny over AI companies’ web scraping practices and unauthorized use of copyrighted content for training generative AI models.

  • Much of the material used to develop generative AI models has been extracted from web sources using automated bots and crawlers.
  • British media publishers have joined calls for the UK government to uphold protections around copyrighted content.
  • Many organizations, including the BBC, use “robots.txt” files to tell bots not to extract their data, but compliance remains voluntary (a sketch of how a compliant crawler honors these rules follows this list).
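
Because robots.txt is enforced only on the crawler’s side, here is a minimal sketch of how a compliant bot would consult a site’s rules before fetching a page, using Python’s standard-library parser. The user-agent name and article URL below are illustrative assumptions, not details from the BBC’s letter.

```python
# Minimal sketch: a well-behaved crawler checks a site's robots.txt before
# fetching a page. The user-agent name and page URL are assumptions for
# illustration only.
from urllib import robotparser

ROBOTS_URL = "https://www.bbc.co.uk/robots.txt"
USER_AGENT = "PerplexityBot"  # assumed crawler name, for illustration
PAGE_URL = "https://www.bbc.co.uk/news/articles/example"  # hypothetical URL

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # downloads and parses the site's robots.txt rules

if rp.can_fetch(USER_AGENT, PAGE_URL):
    print(f"robots.txt permits {USER_AGENT} to fetch this URL")
else:
    print(f"robots.txt disallows this URL; a compliant {USER_AGENT} skips it")
```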

How Perplexity operates: The company describes itself as an “answer engine” that searches the web, identifies trusted sources, and synthesizes information into responses.

  • Perplexity claims it doesn’t build foundation models and therefore doesn’t use website content for AI model pre-training.
  • The company advises users to double-check responses for accuracy, a common caveat for AI chatbots known to state false information convincingly.
  • In a June interview with Fast Company, CEO Aravind Srinivas denied accusations that Perplexity’s crawlers ignore robots.txt instructions.

What the BBC alleges: The broadcaster claims Perplexity is “clearly not respecting robots.txt” despite the BBC disallowing two of the company’s crawlers.

  • BBC research found “significant issues with representation of BBC content” in Perplexity AI responses.
  • The output allegedly falls short of BBC Editorial Guidelines around providing impartial and accurate news.
  • Such misrepresentation undermines trust in the BBC among its audience, the corporation argues.

Recent precedent: In January, Apple suspended an AI feature that generated false headlines for BBC News app notifications after BBC complaints, demonstrating the ongoing challenges AI systems face in accurately processing news content.

