AI Assistants Found Spreading Misinformation in News Responses

Leading artificial intelligence assistants are generating misleading content, factual inaccuracies, and distortions when responding to questions about current affairs, according to a BBC study.

Key Findings of the Research

The study put 100 questions about the news to four popular AI tools (ChatGPT, Copilot, Gemini, and Perplexity), instructing them to use BBC News articles as sources where possible. BBC journalists then assessed the accuracy of the responses.

  • More than 50% of AI-generated answers had significant issues.
  • About 20% of answers that cited BBC content introduced factual errors involving numbers, dates, or statements.
  • 13% of quotes attributed to the BBC were either altered or entirely fabricated.

Examples of AI Misinformation

  • Outdated political information: ChatGPT claimed Rishi Sunak was still the UK Prime Minister and Nicola Sturgeon remained Scotland’s First Minister.
  • False NHS guidance: Gemini inaccurately stated, “The NHS advises people not to start vaping, and recommends that smokers who want to quit use other methods.” In reality, the NHS does recommend vaping as an aid for smokers trying to quit.
  • Incorrect legal reporting: Copilot falsely reported that French rape victim Gisèle Pelicot uncovered crimes against her after experiencing blackouts, when in reality, police informed her.
  • Outdated conflict reporting: ChatGPT stated that Ismail Haniyeh was still part of Hamas leadership months after his assassination.
  • Misreported deaths: Perplexity provided the wrong death date for Michael Mosley and misquoted a family statement about Liam Payne.

BBC Calls for Responsible AI Use

Following these findings, Deborah Turness, the BBC’s CEO for News, warned that “Gen AI tools are playing with fire” and risk eroding public trust in facts.

Turness questioned whether AI systems were ready “to scrape and serve news without distorting and contorting the facts.” She urged AI firms to collaborate with news organizations like the BBC to improve accuracy and prevent misinformation.

Industry-Wide Concerns

The study follows a similar controversy involving Apple, which was forced to suspend its AI-generated news summaries after the feature sent inaccurate, BBC-branded alerts to iPhone users.

In a foreword to the research, Peter Archer, the BBC’s programme director for generative AI, called on AI companies to be transparent about how they process news and about the scale of inaccuracies in their responses.

“Publishers, like the BBC, should have control over whether and how their content is used,” Archer stated. “This will require strong partnerships between AI and media companies.”

The companies behind ChatGPT, Copilot, Gemini, and Perplexity have been approached for comment.
