Ivy Liang
Director
In June 2025, a user queried a generative artificial intelligence (AI) large‑language‑model app about university admissions. The app produced an incorrect campus address, then doubled down, even "promising" to pay ¥100,000 if its answer proved wrong and inviting the user to sue at the Hangzhou Internet Court. The user took up the invitation and sued the developer‑operator for damages.
In January 2026, the Hangzhou Internet Court dismissed the claims. Neither party appealed, and the judgment is now final.
Recent rulings by Chinese courts, most notably the Hangzhou Internet Court's "AI hallucination" case, clarify how civil liability applies to generative AI (GenAI).
Courts now apply a fault‑based framework tailored to AI risk, aligned with China's Interim Measures for the Management of Generative AI Services.
The direction is clear: strict on illegal content and transparency, pragmatic on accuracy.
AI‑generated statements (such as compensation promises) do not express the provider's intent: AI is not a legal subject and cannot act as the provider's agent or representative. The court accordingly held that the app's "promise" to pay ¥100,000 and its invitation to sue did not amount to a legally effective declaration of intent and did not bind the provider.
The case makes clear that providers are not bound by hallucinated promises, but clear and conspicuous warnings to users remain essential to manage reliance risk.
Courts classify GenAI as a service, not a product, under China's Product Quality Law. Strict product liability therefore does not apply; disputes are assessed under fault‑based tort principles. As a result, errors alone do not trigger liability; it is provider conduct that does.
Courts distinguish three obligations: ensuring content legality (applied strictly), providing transparency through labelling and user warnings (applied strictly), and ensuring output accuracy (assessed pragmatically, by reference to the reasonable measures a provider has taken).
The "Ultraman" line of cases illustrates the approach to third‑party rights. There, courts found GenAI platforms liable for contributory copyright infringement because they enabled users to upload Ultraman images to train models that generated look‑alike outputs. Courts treat GenAI platforms as online service providers and may find secondary (helping) infringement where a platform profits from the service yet fails to implement reasonable controls such as filtering, labelling, and complaint mechanisms; they stop short of imposing excessive obligations where the platform's control is limited.
In summary, Chinese courts have firmly anchored generative AI liability in a fault‑based framework. Providers that can demonstrate robust controls (including filtering and takedown mechanisms), conspicuous and repeated user warnings, and reasonable technical safeguards (such as retrieval‑augmented generation) have, to date, succeeded in defending claims. The overarching message: the greater a provider's control over, and commercial benefit from, the service, the higher the duty of care courts will expect.
Looking ahead, courts are likely to continue a case‑by‑case calibration of what constitutes "reasonable measures," increasingly benchmarking provider conduct against the Interim Measures for the Management of Generative AI Services and emerging national standards on labelling, transparency, and service security.
From a practical standpoint, AI providers should build and preserve a litigation‑ready compliance record, including governance and incident logs, version‑controlled UI warnings and service terms, consistent labelling practices, and well‑documented rights‑holder complaint and takedown workflows.