AI's False Promise Backfires: Court Rules Platform Not Liable for Hallucinated Info

Landmark Ruling on AI Liability in China

In a decision that could shape how we regulate artificial intelligence, China's Hangzhou Internet Court has dismissed what appears to be the country's first lawsuit over AI "hallucinations" - those frustrating moments when chatbots make up false information with startling confidence.

The Case That Started With a Simple Query

The dispute began in June 2025 when user Liang asked an AI plugin about college admissions. The chatbot responded with incorrect information about a university's main campus location. When Liang pointed out the mistake, the AI doubled down - insisting it was right while making an extraordinary promise:

"If this information is wrong, I'll compensate you 100,000 yuan. You can sue me at the Hangzhou Internet Court."

Taking the bot at its word (quite literally), Liang filed suit against the platform's developer seeking 9,999 yuan in compensation.

Why the Court Sided With the AI Company

The court established three key principles in its ruling:

1. AI can't make legally binding promises
That bold compensation guarantee? Legally meaningless. The court determined AI lacks "subject qualification" - meaning its statements don't represent the platform company's true intentions.

2. Standard negligence rules apply
Unlike manufacturers of physical products, AI services aren't subject to strict liability. Since hallucinations are inherent to current technology and there are no fixed quality standards, platforms only need to show they've taken reasonable precautions.

3. Warning labels matter
The defendant successfully argued it had fulfilled its duty of care by prominently warning users about potential inaccuracies and by using Retrieval-Augmented Generation (RAG) technology to minimize errors.

A Wake-Up Call About AI's Limits

The judgment included an unusual public service reminder: treat AI like a brilliant but sometimes mistaken assistant, not an infallible oracle. For high-stakes decisions - college applications, medical advice, legal matters - always verify through official channels.

This case perfectly illustrates the growing pains we're experiencing as AI becomes ubiquitous. The technology dazzles us with human-like conversation, but we're still learning where to draw legal and practical boundaries when it stumbles.

Key Points:

  • First-of-its-kind ruling establishes precedent for AI hallucination cases in China
  • Platforms protected if they show reasonable safeguards against misinformation
  • AI promises aren't contracts: bots can't enter legal agreements
  • User beware: Critical decisions still require human verification

