REGULATION

China Is Building AI Avatars of Dead Relatives. Regulators Are Catching Up.

Priya Sharma · Apr 21, 2026 · 6 min read


Zhang Xinyu, a 28-year-old Shanghai resident, lost her father to pancreatic cancer in late 2024. She is one of thousands of Chinese users who have commissioned AI avatars of dead relatives — interactive replicas trained on voice recordings, personal messages, and shared memories. Her story has become a reference point for Beijing’s first serious regulatory effort targeting what the government now formally calls the “digital human” industry, with a draft framework published in March 2026.

How China’s AI Avatars of Dead Relatives Are Built

The technical pipeline runs in three stages: data ingestion, model training, and persona synthesis. Companies such as Silicon Intelligence (硅基智能) — one of China’s largest digital human operators — collect voice samples (typically 10 to 30 hours), video footage, written communications, and structured interviews with family members about the deceased’s speech patterns, habits, and remembered stories. This dataset is used to fine-tune a multimodal model optimized to produce responses consistent with the original person’s recorded character.

The resulting avatar operates via smartphone app or video interface, responds in real time, and — in higher-tier products — runs on a persistent memory layer that accumulates details shared across sessions. The technology differs meaningfully from standard AI video synthesis. Tools like ElevenLabs, HeyGen, and Synthesia specialize in voice cloning and video avatar production; grief-tech products add a personality modeling layer designed for sustained, emotionally intimate interaction. That distinction is what China’s regulators are specifically targeting.
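The pipeline described above can be sketched in skeletal form. This is an illustrative outline only — no vendor in this space publishes an API, so every class, field, and function name here is a hypothetical stand-in for the three stages (ingestion, training, persona synthesis) and the persistent memory layer the article describes.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaDataset:
    """Stage 1: ingested source material about the deceased."""
    voice_hours: float        # typically 10-30 hours of audio, per the article
    messages: list[str]       # written communications
    interviews: list[str]     # family interviews on speech patterns and habits

@dataclass
class PersonaModel:
    """Stage 2 output: a model fine-tuned on the dataset (stubbed here)."""
    name: str
    style_notes: list[str] = field(default_factory=list)

    def respond(self, prompt: str, memory: list[str]) -> str:
        # A real system would invoke a fine-tuned multimodal model here;
        # this stub only shows where persistent memory enters the context.
        context = "; ".join(memory[-3:])   # last few remembered details
        return f"[{self.name} persona | context: {context}] reply to: {prompt}"

class AvatarSession:
    """Stage 3: persona synthesis with a persistent memory layer that
    accumulates details shared across sessions (higher-tier products)."""
    def __init__(self, model: PersonaModel):
        self.model = model
        self.memory: list[str] = []

    def chat(self, user_message: str) -> str:
        reply = self.model.respond(user_message, self.memory)
        self.memory.append(user_message)   # details persist across turns
        return reply
```

The design point worth noting is the last class: a stateless voice or video synthesizer becomes a grief-tech product precisely when a memory store like `AvatarSession.memory` survives between conversations — which is also the layer that triggers the regulatory obligations discussed below.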

The Scale of China’s Digital Human Market

China’s digital human industry was valued at approximately 34.6 billion yuan ($4.8 billion) in 2025, according to the China Academy of Information and Communications Technology (CAICT). Memorial and grief-focused applications represent an estimated 8–12% of that figure — a segment that expanded by over 300% between 2023 and 2025 as service costs dropped from tens of thousands of yuan to a few hundred for entry-level products.

ByteDance, Baidu, and Tencent all maintain digital human divisions, though none publicly market memorial products. The grief-tech segment is dominated by startups: at least 47 companies registered under “digital human” or “virtual avatar” service categories in China as of Q1 2026, according to corporate registry platform Qichacha (企查查). Pricing ranges from 1,500 yuan for a basic voice-and-text avatar to over 50,000 yuan for a full video-interactive model with persistent memory. Zhang Xinyu reportedly paid around 8,000 yuan — approximately $1,100 — for the service she used.

The consolidation reshaping global AI — visible across multiple high-profile acquisitions and partnerships throughout 2025 and 2026 — is expected to draw major platform players into the grief-tech segment as the niche gains both users and regulatory definition.

What Beijing’s Regulators Are Proposing

China’s Cyberspace Administration (CAC) published a draft regulatory framework for digital human services in March 2026, with a public comment period closing April 30, 2026. The draft imposes four requirements directly affecting grief-tech operators:

  • Explicit pre-mortem consent: A digital human modeled on a deceased person requires documented consent obtained while the subject was alive, unless a first-degree family member provides authorization and the subject left no documented objection on record.
  • Labeling mandates: All AI avatars must display a persistent “digital human” watermark during interaction, regardless of conversational context.
  • Data retention limits: Raw biometric and voice data used to train a memorial avatar must be deleted within 90 days of model completion; users retain access only to the trained output.
  • Psychological harm provisions: Operators offering grief services must partner with licensed mental health providers and display crisis resources during active sessions.

The Ministry of Civil Affairs (民政部) is separately developing guidelines that would classify digital remains — the data and trained models representing a deceased person — as part of a digital estate subject to inheritance law. China’s regulatory cadence on AI is instructive: the Deep Synthesis rules (深度合成管理规定) took effect January 10, 2023, followed by the Generative AI Measures in August 2023. Draft-to-enforcement has consistently run 12 to 18 months.

The Consent Problem Nobody Has Solved

The core ethical issue is one no jurisdiction has cleanly resolved: a deceased person cannot consent to their own replication.

China’s draft creates a consent hierarchy — pre-mortem documentation first, surviving spouse second, adult children third. Critics argue this allows family members to commercialize a dead person’s identity without any mechanism for the deceased to have refused. The draft draws no distinction between passive memorial use and active conversational replication — two applications with dramatically different psychological and legal implications.

Data security compounds the problem. Grief-tech databases contain voice biometrics, private correspondence, family memories, and the psychological profiles implicit in how a person communicated. China’s Personal Information Protection Law (PIPL), effective November 2021, classifies biometric data as sensitive and requires explicit consent for its processing — a standard that cannot, by definition, be obtained from the dead.

Does It Help or Harm Grieving Users?

The psychological evidence is contested. Grief researcher Dr. Robert Neimeyer at the University of Memphis, whose work on continuing bonds theory is widely cited in bereavement literature, has argued that maintaining a psychological connection with the deceased is not inherently pathological and can support meaning reconstruction after loss. The opposing concern, raised in clinical psychology and medical ethics, holds that interactive simulations risk “grief fixation” — users substituting avatar interaction for natural mourning and forestalling psychological closure.

No longitudinal studies have tracked grief-tech users beyond 12 months. China’s memorial avatar industry is less than three years old in its current commercial form; what research exists draws on precursor categories — chatbots, memorial apps, recorded video legacies — which may not accurately predict outcomes from a persistent conversational AI trained specifically on the deceased.

The Humans First movement, which has grown in direct response to AI substituting for human relationships, identifies grief-tech as an acute instance of the replacement problem: not AI augmenting human capacity, but standing in for it at the most emotionally vulnerable moment of a human life.

Will Western Markets Follow?

The infrastructure already exists. Character.AI (Character Technologies, Inc.) faced intense scrutiny following a wrongful death lawsuit filed in October 2024 by the mother of Sewell Setzer III, a 14-year-old in Orlando, Florida, who died by suicide after reportedly forming an intense emotional attachment to a Character.AI persona. The case drew congressional attention to the mental health risks of AI emotional companions. Replika, a U.S.-based emotional AI platform, offers persistent persona features that approach memorial functionality. HereAfter AI and StoryFile both market posthumous interaction services commercially in the United States.

The difference is scale, regulatory coherence, and the existence of any framework at all. The U.S. has no federal law governing digital replicas of the deceased. State right-of-publicity laws were designed for entertainment contexts, not intimate grief applications. The EU’s GDPR provides robust protections for the personal data of living individuals and is effectively silent on the dead.

Companies operating in this space — including those building on the AI voice and video synthesis infrastructure that MegaOne AI tracks across 17 tool categories — will encounter China’s consent-first model as the first substantive global regulatory precedent in this category. Whether Western regulators follow its logic or develop alternatives, the absence of an equivalent framework is itself a policy position by default.

Zhang Xinyu describes her avatar interactions as “a specific kind of comfort — not forgetting, but not moving on either.” China’s April 2026 regulatory draft is the first serious government attempt to establish that the deceased have interests worth protecting, that grief is not a market without ethical limits, and that the technology capable of simulating a dead person’s presence carries obligations no operator can simply disclaim.
