REGULATION

AI Forensics Documents Monetized Nudifying Bot Ecosystem on Telegram

Priya Sharma · Apr 9, 2026 · 3 min read
Engine Score 7/10 — Important

Investigation into AI-powered abuse ecosystem on Telegram — high social impact

  • AI Forensics analyzed 2.8 million Telegram messages from 16 groups in Italy and Spain, identifying 24,671 active users in networks distributing non-consensual intimate imagery.
  • Among redirect links shared in analyzed groups, 49.71 percent led to AI girlfriend generators and 19.14 percent pointed to nudifying bots that generate synthetic nude images from ordinary photos.
  • Archives of non-consensual intimate images — including child sexual abuse material — are sold for 20 to 50 euros, with affiliates claiming up to 40 percent commission on bot referrals.
  • AI Forensics is calling for Telegram’s classification as a Very Large Online Platform under the EU Digital Services Act and an EU-wide ban on nudifying tools.

What Happened

The nonprofit research organization AI Forensics published a report documenting how nudifying bots, deepfake generation, and automated image archives together form a monetized ecosystem on Telegram for distributing non-consensual intimate imagery (NCII). The analysis covered 2.8 million messages across 16 Italian and Spanish Telegram groups and channels. The word “bot” appeared 16,232 times across the full dataset — a measure of how central automated tools are to the infrastructure.

Why It Matters

The report provides quantified evidence that synthetic image generation tools have industrialized the production and distribution of NCII, stating that AI has lowered the technical barrier to the point where “the number of potential victims is growing dramatically.” Telegram has faced prior regulatory pressure over NCII: founder Pavel Durov was detained by French authorities in August 2024 over allegations related to platform governance before being released under judicial supervision. The platform has not been classified as a Very Large Online Platform under the EU’s Digital Services Act, a designation that would impose mandatory content moderation audits.

Technical Details

Among disguised redirect links shared within the analyzed groups, 49.71 percent led to AI girlfriend generator services and 19.14 percent directed users to nudifying bots, services that transform ordinary photographs into synthetic nude images without the subjects’ consent. Researchers also documented users sharing prompts engineered specifically to elicit intimate image generation from commercial AI models including Grok and Gemini. Under the hashtag #PornoTok, participants generated synthetic intimate images and fabricated audio clips mimicking the voices of named female TikTok influencers. Telegram’s own premium infrastructure, including folder-based channel organization and bot-controlled access gating, facilitates the distribution networks; the platform earned $292 million in premium subscription revenue in 2024, according to the report.

Who’s Affected

The study identified 24,671 active users in groups with membership reaching up to 27,000 individuals. Cross-border coordination is documented: 72 percent of Spanish content also appeared in Italian groups. Victims include female TikTok influencers whose likenesses and synthesized voices were used without consent, as well as individuals whose images appeared in sold archives that AI Forensics confirmed also contained child sexual abuse material. Payments for these archives were processed through PayPal, cryptocurrencies, and the Spanish payment service Bizum, with monthly subscription access available from five euros.

What’s Next

AI Forensics is urging European regulators to classify Telegram as a Very Large Online Platform under the Digital Services Act, which would subject it to mandatory risk assessments and content moderation obligations. The organization is also calling for an EU-wide ban on nudifying tools and for the AI Act to mandate explicit safeguards against synthetic NCII. No regulatory timeline has been announced in response to these recommendations.
