In June 2025, a user queried a generative artificial intelligence (AI) large‑language‑model app about university admissions; the system produced an incorrect campus address, then doubled down and even "promised" to pay ¥100,000 if its answer proved wrong, inviting the user to sue at the Hangzhou Internet Court. The user sued the developer‑operator for damages.

In January 2026, the Hangzhou Internet Court dismissed the claims. Neither party appealed, and the judgment is now final.

Why does this matter?

Recent rulings by Chinese courts, most notably the Hangzhou Internet Court's "AI hallucination" case, clarify how civil liability applies to generative AI (GenAI).

Courts now apply a fault‑based framework tailored to AI risk, aligned with China's Interim Measures for the Management of Generative AI Services.

The direction is clear: strict on illegal content and transparency, pragmatic on accuracy.

Core judicial positions

A "promise" from AI cannot bind the provider

AI‑generated statements (e.g., compensation promises) do not constitute the provider's intent. AI is not a legal subject and cannot act as an agent or representative.

Although the model "promised" ¥100,000 in compensation if its output proved wrong and invited the user to sue, the court held that such AI‑generated statements do not create a legally effective declaration of intent and do not bind the provider.

The case makes clear that providers are not bound by hallucinated promises, but clear warnings to users remain essential to manage reliance risk.

GenAI is a service, not a product

Courts classify GenAI as a service, not a product, under China's Product Quality Law. Strict product liability therefore does not apply; disputes are assessed under fault‑based tort principles. Consequently, errors alone do not trigger liability; what matters is provider conduct.

Tiered duties of care (fault‑based)

Courts distinguish three obligations:

  1. Prohibited/illegal content – result duty

    If clearly illegal or harmful content is generated, fault is generally presumed.

    What to do: robust filters, moderation, rapid takedown.

  2. Functional limitations – disclosure duty

    Providers must give clear, prominent warnings that outputs may be inaccurate, with heightened alerts in high‑risk contexts (health, legal, safety).

    What to do: unavoidable UI notices and contextual prompts.

  3. Accuracy – means duty (not a guarantee)

    No obligation to ensure correctness. Courts assess whether reasonable technical and governance measures were adopted given current technology and risk.

    What to do: demonstrate reasonable processes (e.g., retrieval‑augmented generation and other safeguards) rather than perfection; a sketch of how these layered duties might look in practice follows this list.
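Taken together, these tiered duties read less like a single obligation and more like layers in a provider's response pipeline: block what is plainly illegal, warn about what may be wrong, and take reasonable steps toward accuracy. The sketch below is one illustrative mapping under those assumptions; every name in it (ILLEGAL_PATTERNS, DISCLAIMER, HIGH_RISK_TOPICS, retrieve_documents, generate_answer) is hypothetical and is not drawn from the judgment, the Interim Measures, or any specific library.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Duty 1 (result duty): clearly illegal or harmful content must be blocked outright.
ILLEGAL_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"\bweapon synthesis\b", r"\bchild abuse\b")]

# Duty 2 (disclosure duty): a standing accuracy warning, escalated for high-risk topics.
DISCLAIMER = "AI-generated content may be inaccurate; please verify independently."
HIGH_RISK_TOPICS = {"health", "legal", "safety"}

@dataclass
class Response:
    text: str
    warnings: list[str] = field(default_factory=list)
    blocked: bool = False
    audit_log: dict = field(default_factory=dict)  # retained for the compliance record

def retrieve_documents(query: str) -> list[str]:
    """Placeholder retrieval step (duty 3: reasonable measures such as RAG, not a guarantee)."""
    return ["Official admissions page: the campus address is published at example.edu/contact"]

def generate_answer(query: str, context: list[str]) -> str:
    """Placeholder for the model call, grounded in the retrieved context."""
    return f"Based on retrieved sources: {context[0]}"

def answer(query: str, topic: str) -> Response:
    audit = {"query": query, "topic": topic, "time": datetime.now(timezone.utc).isoformat()}

    # Duty 1: filter prohibited content before generation (and again afterwards in practice).
    if any(p.search(query) for p in ILLEGAL_PATTERNS):
        audit["action"] = "blocked"
        return Response(text="", blocked=True, audit_log=audit)

    # Duty 3: ground the answer in retrieved sources rather than promise correctness.
    context = retrieve_documents(query)
    text = generate_answer(query, context)

    # Duty 2: attach a clear, prominent warning; add a heightened alert in high-risk contexts.
    warnings = [DISCLAIMER]
    if topic in HIGH_RISK_TOPICS:
        warnings.append("High-risk topic: consult a qualified professional before acting on this answer.")

    audit.update(action="answered", warnings=warnings)
    return Response(text=text, warnings=warnings, audit_log=audit)
```

The design is as much evidential as functional: each branch records what was blocked, retrieved, and warned about, which is precisely the kind of provider conduct a fault‑based analysis examines.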

Further reading: Platform liability in GenAI copyright cases

In the "Ultraman" cases, courts found GenAI platforms liable for contributory copyright infringement where they enabled users to upload Ultraman images to train models that generated look‑alike outputs. Courts treat GenAI platforms as online service providers and may find secondary (aiding) infringement where a platform profits and fails to implement reasonable controls such as filters, labelling, and complaint channels. They stop short of imposing excessive obligations where the platform's control is limited.

What should AI providers do next?

In summary, Chinese courts have firmly anchored generative AI liability in a fault‑based framework. Providers that can demonstrate robust controls (including filtering and takedown mechanisms), conspicuous and repeated user warnings, and reasonable technical safeguards (such as retrieval‑augmented generation) have, to date, succeeded in defending claims. The overarching message is that the greater a provider's control over, and commercial benefit from, the service, the higher the duty of care courts will expect.

Looking ahead, courts are likely to continue a case‑by‑case calibration of what constitutes "reasonable measures," increasingly benchmarking provider conduct against the Interim Measures for the Management of Generative AI Services and emerging national standards on labelling, transparency, and service security.

From a practical standpoint, AI providers should build and preserve a litigation‑ready compliance record, including governance and incident logs, version‑controlled UI warnings and service terms, consistent labelling practices, and well‑documented rights‑holder complaint and takedown workflows.
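One way to make that record concrete is an append‑only, hash‑chained event log that captures complaints, takedowns, and changes to warning text as they happen. The sketch below is a minimal illustration under that assumption; the file name, field names, and event taxonomy are invented for the example and do not correspond to any prescribed regulatory format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("compliance_log.jsonl")   # append-only JSON Lines record (hypothetical file name)
WARNING_VERSION = "ui-warning-v3"         # version-controlled warning text currently in force

def log_event(event_type: str, detail: dict) -> None:
    """Append a compliance event such as 'rights_holder_complaint', 'takedown', or 'warning_updated'."""
    prev_digest = ""
    if LOG_PATH.exists():
        lines = LOG_PATH.read_text(encoding="utf-8").strip().splitlines()
        if lines:
            prev_digest = json.loads(lines[-1])["digest"]

    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "warning_version": WARNING_VERSION,
        "detail": detail,
        "prev_digest": prev_digest,  # chaining digests makes after-the-fact edits detectable
    }
    entry["digest"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Example: record a rights-holder complaint and the follow-up takedown.
log_event("rights_holder_complaint", {"work": "Ultraman image set", "channel": "email"})
log_event("takedown", {"content_id": "gen-20260114-0042", "latency_hours": 6})
```

The value of such a log is evidentiary: it lets a provider show when a warning version changed, how quickly a takedown followed a complaint, and that the record was not edited after the fact.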