AI Assistants Found Spreading Misinformation in News Responses

Leading artificial intelligence assistants are generating misleading content, factual inaccuracies, and distortions when responding to questions about current affairs, according to a BBC study.

Key Findings of the Research

The study examined responses from four popular AI tools—ChatGPT, Copilot, Gemini, and Perplexity—to 100 questions using BBC articles as sources. BBC journalists then assessed the accuracy of the responses.

  • More than 50% of AI-generated answers had significant issues.
  • About 20% contained factual errors in numbers, dates, or statements.
  • 13% of quotes attributed to the BBC were either altered or entirely fabricated.
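For illustration, here is a minimal, hypothetical Python sketch of how reviewer verdicts of this kind could be tallied into headline percentages like those above. The rubric, field names, and categories are assumptions made for this example only; they do not reflect the BBC's actual review methodology.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """One reviewer's verdict on one AI answer (illustrative rubric, not the BBC's)."""
    assistant: str           # e.g. "ChatGPT", "Gemini" (hypothetical labels)
    question_id: int
    significant_issue: bool  # answer judged materially flawed overall
    factual_error: bool      # wrong number, date, or statement
    quote_problem: bool      # attributed quote altered or fabricated

def summarize(assessments: list[Assessment]) -> dict[str, float]:
    """Return the share of answers flagged in each category."""
    n = len(assessments)
    if n == 0:
        return {}
    return {
        "significant_issues": sum(a.significant_issue for a in assessments) / n,
        "factual_errors": sum(a.factual_error for a in assessments) / n,
        "quote_problems": sum(a.quote_problem for a in assessments) / n,
    }

# Two mock verdicts, purely to show the tallying; not real study data.
ratings = [
    Assessment("ChatGPT", 1, significant_issue=True, factual_error=True, quote_problem=False),
    Assessment("Gemini", 2, significant_issue=False, factual_error=False, quote_problem=False),
]
print(summarize(ratings))  # {'significant_issues': 0.5, 'factual_errors': 0.5, 'quote_problems': 0.0}
```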

Examples of AI Misinformation

  • Outdated political information: ChatGPT claimed Rishi Sunak was still the UK Prime Minister and Nicola Sturgeon remained Scotland’s First Minister.
  • False NHS guidance: Gemini inaccurately stated, “The NHS advises people not to start vaping, and recommends that smokers who want to quit use other methods.”
  • Incorrect legal reporting: Copilot falsely reported that French rape victim Gisèle Pelicot uncovered the crimes against her after experiencing blackouts, when in reality she learned of them from the police.
  • Outdated Middle East coverage: ChatGPT stated that Ismail Haniyeh was still part of Hamas leadership months after his assassination.
  • Misreported deaths: Perplexity provided the wrong death date for Michael Mosley and misquoted a family statement about Liam Payne.

BBC Calls for Responsible AI Use

Following these findings, Deborah Turness, the BBC’s CEO for News, warned that “Gen AI tools are playing with fire” and risk eroding public trust in facts.

Turness questioned whether AI systems were ready “to scrape and serve news without distorting and contorting the facts.” She urged AI firms to collaborate with news organizations like the BBC to improve accuracy and prevent misinformation.

Industry-Wide Concerns

The study follows a similar controversy involving Apple, which suspended its AI-generated news alert summaries after several inaccurate BBC-branded summaries were pushed to iPhone users.

In a foreword to the research, Peter Archer, the BBC’s program director for generative AI, called on AI companies to be transparent about how they process news and to acknowledge inaccuracies in their responses.

“Publishers, like the BBC, should have control over whether and how their content is used,” Archer stated. “This will require strong partnerships between AI and media companies.”

The companies behind ChatGPT, Copilot, Gemini, and Perplexity have been approached for comment.
