AI Assistants Found Spreading Misinformation in News Responses

Leading artificial intelligence assistants are generating misleading content, factual inaccuracies, and distortions when responding to questions about current affairs, according to a BBC study.

Key Findings of the Research

The study put 100 news-related questions to four popular AI tools (ChatGPT, Copilot, Gemini, and Perplexity), instructing them to use BBC articles as sources where possible. BBC journalists then assessed the accuracy of the responses.

  • More than 50% of AI-generated answers had significant issues.
  • About 20% contained factual errors in numbers, dates, or statements.
  • 13% of quotes attributed to the BBC were either altered or entirely fabricated.

Examples of AI Misinformation

  • Outdated political information: ChatGPT claimed Rishi Sunak was still the UK Prime Minister and Nicola Sturgeon remained Scotland’s First Minister.
  • False NHS guidance: Gemini inaccurately stated, “The NHS advises people not to start vaping, and recommends that smokers who want to quit use other methods.” In fact, the NHS does recommend vaping as an aid for smokers trying to quit.
  • Incorrect legal reporting: Copilot falsely reported that French rape victim Gisèle Pelicot uncovered the crimes against her after experiencing blackouts, when in reality police informed her after discovering video evidence.
  • Outdated conflict reporting: ChatGPT stated that Ismail Haniyeh was still part of Hamas’s leadership months after his assassination.
  • Misreported deaths: Perplexity provided the wrong death date for Michael Mosley and misquoted a family statement about Liam Payne.

BBC Calls for Responsible AI Use

Following these findings, Deborah Turness, the CEO of BBC News, warned that “Gen AI tools are playing with fire” and risk eroding public trust in facts.

Turness questioned whether AI systems were ready “to scrape and serve news without distorting and contorting the facts.” She urged AI firms to collaborate with news organizations like the BBC to improve accuracy and prevent misinformation.

Industry-Wide Concerns

The study follows a similar controversy involving Apple, which suspended its AI-generated summaries of news app notifications after several inaccurate BBC-branded alerts were sent to iPhone users.

In a foreword to the research, Peter Archer, the BBC’s program director for generative AI, called on AI companies to be transparent about how their assistants process news and about the scale of the errors and inaccuracies they produce.

“Publishers, like the BBC, should have control over whether and how their content is used,” Archer stated. “This will require strong partnerships between AI and media companies.”

The companies behind ChatGPT, Copilot, Gemini, and Perplexity have been approached for comment.
