AI Assistants Found Spreading Misinformation in News Responses

Leading artificial intelligence assistants are generating misleading content, factual inaccuracies, and distortions when responding to questions about current affairs, according to a BBC study.

Key Findings of the Research

The study examined responses from four popular AI tools—ChatGPT, Copilot, Gemini, and Perplexity—to 100 questions using BBC articles as sources. BBC journalists then assessed the accuracy of the responses.

  • More than 50% of AI-generated answers had significant issues.
  • About 20% contained factual errors in numbers, dates, or statements.
  • 13% of quotes attributed to the BBC were either altered or entirely fabricated.

Examples of AI Misinformation

  • Outdated political information: ChatGPT claimed Rishi Sunak was still the UK Prime Minister and Nicola Sturgeon remained Scotland’s First Minister.
  • False NHS guidance: Gemini inaccurately stated, “The NHS advises people not to start vaping, and recommends that smokers who want to quit use other methods.”
  • Incorrect legal reporting: Copilot falsely reported that French rape victim Gisèle Pelicot uncovered crimes against her after experiencing blackouts, when in reality, police informed her.
  • Outdated reporting on a deceased leader: ChatGPT stated that Ismail Haniyeh was still part of Hamas’s leadership months after his assassination.
  • Misreported deaths: Perplexity provided the wrong death date for Michael Mosley and misquoted a family statement about Liam Payne.

BBC Calls for Responsible AI Use

Following these findings, Deborah Turness, the BBC’s CEO for News, warned that “Gen AI tools are playing with fire” and risk eroding public trust in facts.

Turness questioned whether AI systems were ready “to scrape and serve news without distorting and contorting the facts.” She urged AI firms to collaborate with news organizations like the BBC to improve accuracy and prevent misinformation.

Industry-Wide Concerns

The study follows a similar controversy involving Apple, which suspended its AI-generated news alert summaries after inaccurate BBC-branded notifications were sent to iPhone users.

In a foreword to the research, Peter Archer, the BBC’s program director for generative AI, called on AI companies to be transparent about how they process news and to acknowledge inaccuracies in their responses.

“Publishers, like the BBC, should have control over whether and how their content is used,” Archer stated. “This will require strong partnerships between AI and media companies.”

The companies behind ChatGPT, Copilot, Gemini, and Perplexity have been approached for comment.
