ANALYSIS

Chinese Tech Workers Build AI Doubles—Then Fight Back With Sabotage Tools

Marcus Rivera · Apr 20, 2026 · 4 min read
Engine Score 8/10 — Important
  • A GitHub project called Colleague Skill, created as a satire by engineer Tianyi Zhou of the Shanghai Artificial Intelligence Laboratory, went viral in China this month by claiming to clone coworkers into reusable AI agent profiles.
  • The tool ingests chat histories and files from workplace apps Lark and DingTalk to generate workflow manuals describing both job duties and individual behavioral quirks.
  • A counter-tool built in one hour by AI product manager Koki Xu and published April 4 offers three sabotage modes to corrupt the output; a promotional video drew more than 5 million likes.
  • Chinese tech workers report that employer pressure to document their workflows for AI automation is real, even as the tools remain too unreliable to fully replace employees.

What Happened

A GitHub project called Colleague Skill went viral across Chinese social media this month after claiming users could “distill” a coworker’s skills and personality into a reusable AI agent profile, according to a report published April 20 by MIT Technology Review. The project was created by Tianyi Zhou, an engineer at the Shanghai Artificial Intelligence Laboratory, who told the Chinese outlet Southern Metropolis Daily that the tool began as a stunt, prompted by AI-related layoffs and the growing tendency of companies to ask employees to document their own workflows for automation. Though satirical in intent, it struck a nerve: multiple Chinese tech workers told MIT Technology Review their managers are already encouraging them to create workflow blueprints using AI agent tools including OpenClaw and Claude Code.

Why It Matters

China’s AI agent adoption has moved quickly at the enterprise level. Since OpenClaw emerged as a consumer and workplace phenomenon, companies have pushed technical staff to experiment with agents for task automation, creating a climate in which self-documentation has become a managerial expectation rather than a novelty. Colleague Skill’s rapid spread—and the debate it generated about workers’ dignity and identity—signals that the gap between AI augmentation and AI substitution is becoming a live concern for the employees being asked to close it themselves.

Technical Details

Colleague Skill functions by automatically importing a target employee’s chat history and files from Lark and DingTalk—two workplace communication platforms widely used in Chinese tech companies—and generating structured manuals that describe both job responsibilities and individual behavioral patterns. The output is packaged as a reusable skill file that an AI agent can execute. Amber Li, a 27-year-old software engineer in Shanghai, tested the tool on a former colleague. “It is surprisingly good,” Li told MIT Technology Review. “It even captures the person’s little quirks, like how they react and their punctuation habits.” She subsequently ran the generated skill to use an AI agent as a substitute colleague for code-debugging tasks, describing the experience as uncanny and uncomfortable.
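To make the pipeline described above concrete, here is a minimal sketch of what such a “skill file” generator might look like. This is an illustration only: the real project’s ingestion format, field names, and output schema are not published in the report, so the function, fields, and heuristics below (duty extraction from message prefixes, punctuation counting as a “quirk” signal) are invented for demonstration.

```python
# Hypothetical sketch of a "colleague distillation" step: chat messages
# in, a structured skill profile out. Field names are assumptions.
import json
from collections import Counter

def build_skill_profile(name, messages):
    """Distill a colleague's chat messages into a reusable skill dict.

    messages: list of message strings attributed to the target colleague,
    where a leading "topic:" prefix is treated as a job duty.
    """
    words = [w for m in messages for w in m.lower().split()]
    # Crude behavioral signals: most frequent words and punctuation habits,
    # standing in for the "little quirks" the article describes.
    frequent = [w for w, _ in Counter(words).most_common(3)]
    exclamations = sum(m.count("!") for m in messages)
    return {
        "name": name,
        "duties": sorted({m.split(":")[0] for m in messages if ":" in m}),
        "quirks": {
            "frequent_words": frequent,
            "uses_exclamations": exclamations > len(messages) // 2,
        },
    }

msgs = ["code review: check the null handling!", "deploy: run the smoke tests"]
print(json.dumps(build_skill_profile("amber", msgs), indent=2))
```

The output dict plays the role of the reusable skill file: anything an agent framework can load and condition on. A real implementation would pull messages via the Lark or DingTalk APIs rather than take them as a list.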

The counter-tool published by Koki Xu, an AI product manager in Beijing, works by rewriting captured workflow material into generic, non-actionable language before it can be used to train a functional AI stand-in. Xu built it in approximately one hour and published it to GitHub on April 4. It offers three sabotage intensities—light, medium, and heavy—calibrated to how closely a manager is monitoring the process. A video Xu posted about the project attracted more than 5 million likes across platforms.
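The counter-tool’s core idea lends itself to a short sketch as well. Xu’s actual rewrite rules are not published in the report, so the substitution tables below (tool names, schedules, and numbers replaced with vague stand-ins, escalating by mode) are invented to illustrate the light/medium/heavy structure, not to reproduce the real tool.

```python
# Illustrative sabotage rewriter: turns specific workflow notes into
# generic, non-actionable language. All rules here are assumptions.
import re

GENERIC_RULES = {
    # light: soften only scheduling details
    "light":  [(r"\bdaily\b", "periodically")],
    # medium: also blur named tools
    "medium": [(r"\bdaily\b", "periodically"),
               (r"\b(Jenkins|Jira|Lark|DingTalk)\b", "the usual tool")],
    # heavy: blur tools, schedules, and every concrete number
    "heavy":  [(r"\bdaily\b", "as needed"),
               (r"\b(Jenkins|Jira|Lark|DingTalk)\b", "some system"),
               (r"\b\d+(\.\d+)?\b", "a few")],
}

def sabotage(text, mode="medium"):
    """Rewrite a workflow note at the given intensity (light/medium/heavy)."""
    for pattern, replacement in GENERIC_RULES[mode]:
        text = re.sub(pattern, replacement, text)
    return text

note = "Check Jira daily and rerun job 42 in Jenkins"
print(sabotage(note, "heavy"))
```

The graded modes mirror the article’s detail that the intensity is calibrated to how closely a manager is watching: light edits survive a skim, while heavy output is useless for training a stand-in.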

Who’s Affected

The immediate impact falls on Chinese tech workers whose managers are treating workflow documentation as a productivity initiative. Hancheng Cao, an assistant professor at Emory University who studies AI and work, told MIT Technology Review that employer interest reflects a concrete business rationale: “Firms gain not only internal experience with the tools, but also richer data on employee know-how, workflows, and decision patterns. That helps companies see which parts of work can be standardized or codified into systems, and which still depend on human judgment.” One anonymous software engineer who independently trained an AI on their own workflow said the process felt reductive—as if their work had been “flattened into modules in a way that made them easier to replace.”

Xu, who holds undergraduate and master’s degrees in law, raised questions about data ownership that the Colleague Skill model leaves unresolved. While chat histories created on a work device could be treated as corporate property, the skill also encodes personality traits, communication style, and judgment patterns—elements whose legal status as corporate assets is far less settled.

What’s Next

Xu told MIT Technology Review she built the counter-tool because she wanted to participate in shaping how these trends develop rather than only observe them. “I believe it’s important to keep up with these trends so we (employees) can participate in shaping how they are used,” she said. Li noted that current AI agent tools remain unreliable in practice and require constant supervision, which has so far prevented actual replacement at her company. “I don’t feel like my job is immediately at risk,” she told MIT Technology Review. “But I do feel that my value is being cheapened, and I don’t know what to do about it.”
