Italy’s antitrust regulator AGCM announced on Monday that it has launched an investigation into Chinese artificial intelligence startup DeepSeek, alleging the company failed to adequately inform users about the risk of inaccurate or misleading content generated by its AI technology.
Investigation Focuses on User Warnings
According to the AGCM, DeepSeek did not provide users with “sufficiently clear, immediate and intelligible” warnings regarding the possibility of so-called AI “hallucinations.” These occur when an AI model produces false, misleading, or entirely fabricated information in reply to a user prompt.
Concerns Over AI-Generated Content
The regulator emphasized the need for transparency in disclosing the risks associated with AI-generated content. It stated that users must be properly informed about the potential for inaccurate outputs, particularly as AI tools increasingly influence decision-making and information consumption.
Previous Regulatory Action on DeepSeek
This latest investigation follows earlier action by another Italian agency. In February, Italy’s data protection authority ordered DeepSeek to block access to its chatbot after the company failed to adequately address concerns related to its privacy policy and data protection practices.
Company Response Pending
DeepSeek has not yet publicly commented on the investigation and did not immediately respond to requests for comment on Monday.
As AI tools become more widely adopted, regulators are increasing their scrutiny of how companies communicate risks to users. The Italian antitrust authority’s investigation into DeepSeek highlights growing concerns about transparency, consumer protection, and accountability in the development and deployment of AI technologies.