AI Assistants Found Spreading Misinformation in News Responses

Leading artificial intelligence assistants are generating misleading content, factual inaccuracies, and distortions when responding to questions about current affairs, according to a BBC study.

Key Findings of the Research

The study examined how four popular AI tools (ChatGPT, Copilot, Gemini, and Perplexity) answered 100 questions about the news, drawing on BBC articles as sources. BBC journalists then assessed the accuracy of the responses.

  • More than half of the AI-generated answers were judged to have significant issues.
  • About 20% of answers that cited BBC content introduced factual errors in numbers, dates, or statements.
  • 13% of quotes attributed to BBC articles were either altered or entirely fabricated.

Examples of AI Misinformation

  • Outdated political information: ChatGPT claimed Rishi Sunak was still the UK Prime Minister and Nicola Sturgeon remained Scotland’s First Minister.
  • False NHS guidance: Gemini inaccurately stated, “The NHS advises people not to start vaping, and recommends that smokers who want to quit use other methods.”
  • Incorrect legal reporting: Copilot falsely reported that French rape victim Gisèle Pelicot uncovered crimes against her after experiencing blackouts, when in reality, police informed her.
  • Outdated Middle East reporting: ChatGPT stated that Ismail Haniyeh was still part of Hamas leadership months after his assassination.
  • Misreported deaths: Perplexity provided the wrong death date for Michael Mosley and misquoted a family statement about Liam Payne.

BBC Calls for Responsible AI Use

Following these findings, Deborah Turness, the BBC’s CEO for News, warned that “Gen AI tools are playing with fire” and risk eroding public trust in facts.

Turness questioned whether AI systems were ready “to scrape and serve news without distorting and contorting the facts.” She urged AI firms to collaborate with news organizations like the BBC to improve accuracy and prevent misinformation.

Industry-Wide Concerns

The study follows a similar controversy involving Apple, which was forced to suspend its AI-generated, BBC-branded news alerts after multiple inaccurate summaries were sent to iPhone users.

In a foreword to the research, Peter Archer, the BBC's programme director for generative AI, called on AI companies to be transparent about how they process news and to acknowledge inaccuracies in their responses.

“Publishers, like the BBC, should have control over whether and how their content is used,” Archer stated. “This will require strong partnerships between AI and media companies.”

The companies behind ChatGPT, Copilot, Gemini, and Perplexity have been approached for comment.
