ANALYSIS

North Korean Hackers Used ChatGPT and Cursor to Steal $12M in Crypto, Expel Finds

Anika Patel · Apr 23, 2026 · 4 min read
Engine Score 7/10 — Important
  • Cybersecurity firm Expel identified a North Korean state-sponsored group called HexagonalRodent that installed credential-stealing malware on more than 2,000 computers targeting cryptocurrency and Web3 developers.
  • The group used OpenAI’s ChatGPT, Cursor, and Anima to write malware and fabricate company websites, stealing an estimated $12 million in cryptocurrency over three months.
  • Security researcher Marcus Hutchins identified the malware as likely entirely AI-generated, citing extensive English-language code comments and emojis embedded in source files as indicators.
  • Expel estimates 31 individual operators participated in the campaign, a figure Hutchins said reflects how AI is enabling North Korea to scale operations staffed by low-skill workers.

What Happened

On Wednesday, cybersecurity firm Expel disclosed a North Korean state-sponsored hacking operation in which a group it calls HexagonalRodent installed credential-stealing malware on more than 2,000 computers, targeting developers working on small-scale cryptocurrency launches, NFT projects, and Web3 applications. The group used AI tools from US-based companies — including OpenAI’s ChatGPT, Cursor, and Anima — to write its malware and construct the fraudulent company websites central to its phishing infrastructure. Over approximately three months, the operation is estimated to have stolen as much as $12 million in cryptocurrency.

Marcus Hutchins, the security researcher who identified HexagonalRodent and is widely known for disabling the WannaCry ransomware worm attributed to North Korea in 2017, said the group’s defining quality was not technical capability but the degree to which AI tools compensated for its absence. “These operators don’t have the skills to write code. They don’t have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do,” Hutchins said.

Why It Matters

North Korea has operated one of the world’s most active state-sponsored cybercrime programs for years, deploying hackers and fraudulent IT workers to generate revenue that, according to security researchers, funds the country’s nuclear program and helps it circumvent international sanctions. The HexagonalRodent campaign is one instance in a documented pattern: North Korean operators have repeatedly been observed integrating commercial AI tools into hacking, fraud, and social engineering workflows across multiple groups.

Michael “Barni” Barnhart, a researcher at security firm DTEX who has tracked North Korean cyber operations for years, said the country’s AI adoption spans far more than malware authorship. “North Korea is using AI as a force multiplier, and it is helping with every aspect — building resumes, building websites, building exploits, testing vulnerabilities — and they’re doing it at speed and scale,” Barnhart said. North Korea’s Reconnaissance General Bureau has reportedly established Research Center 227, an organization tasked in part with developing AI-focused offensive hacking tooling for use by state-affiliated units.

Technical Details

HexagonalRodent’s intrusion chain began with fraudulent job offers from fabricated technology companies — complete with AI-generated websites — targeting developers in cryptocurrency and Web3 communities. Victims were eventually asked to download a coding assignment as part of a simulated hiring process; that file contained malware that, once executed, stole credentials and, in some cases, keys controlling cryptocurrency wallet access.

Hutchins analyzed malware samples and found two consistent indicators of AI authorship: comprehensive English-language inline comments throughout the code, inconsistent with typical North Korean developer practice, and emoji characters embedded directly in source files — a pattern he described as “a pretty well-documented sign of AI-written code,” noting that developers on PC keyboards rarely insert emojis manually. Command-and-control server infrastructure tied the malware to previously identified North Korean hacking operations.
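Neither indicator requires deep reverse engineering to check. As a rough illustration only, not a reconstruction of Hutchins's actual tooling, a triage script along the following lines could flag source files that combine emoji characters with an unusually high density of inline comments. The thresholds, file extensions, and Unicode ranges here are illustrative assumptions:

```python
import re
import sys
from pathlib import Path

# Illustrative values -- not drawn from Expel's or Hutchins's analysis.
COMMENT_DENSITY_THRESHOLD = 0.25  # fraction of non-blank lines that are comments
SOURCE_EXTENSIONS = {".py", ".js", ".ts"}

# Rough emoji coverage: misc symbols/dingbats plus the main pictograph planes.
EMOJI_PATTERN = re.compile("[\u2600-\u27BF\U0001F300-\U0001FAFF]")

def scan_file(path: Path) -> dict:
    """Compute simple AI-authorship heuristics for one source file."""
    text = path.read_text(encoding="utf-8", errors="replace")
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    # Naive comment detection covering '#' and '//' comment styles.
    comments = [ln for ln in lines if ln.startswith(("#", "//"))]
    density = len(comments) / len(lines) if lines else 0.0
    return {
        "path": str(path),
        "emoji_count": len(EMOJI_PATTERN.findall(text)),
        "comment_density": round(density, 2),
        "suspicious": bool(EMOJI_PATTERN.search(text))
        and density >= COMMENT_DENSITY_THRESHOLD,
    }

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for f in sorted(root.rglob("*")):
        if f.is_file() and f.suffix in SOURCE_EXTENSIONS:
            result = scan_file(f)
            if result["suspicious"]:
                print(result)
```

A hit from a heuristic like this is a triage signal, not attribution: emoji and verbose comments also appear in plenty of legitimate codebases, so at best it narrows which samples an analyst inspects first.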

The group left portions of its own infrastructure unsecured, inadvertently exposing both the AI prompts it submitted to ChatGPT and Cursor to generate the malware and a database tracking victims' cryptocurrency wallet addresses. Wallets logged in that database held a combined total of approximately $12 million, though Expel noted it could not confirm in every case whether the full balance had been drained or whether a hardware security token had blocked access to the wallet.

Who’s Affected

The campaign focused on individual developers in cryptocurrency, NFT, and Web3 spaces — a demographic Hutchins identified as disproportionately likely to lack the endpoint detection and response (EDR) tools standard in enterprise environments. The AI-generated malware followed behavioral patterns that such tools would typically flag. “They found a niche where you actually can get away with completely AI-generated malware,” Hutchins said. Developers building independently on crypto and Web3 projects remain the segment most exposed to similar campaigns.

North Korea’s broader IT worker program, which places workers inside technology companies while they pose as citizens of other countries, has separately adopted AI tools including deepfakes and AI voice assistants to pass job interviews at Western firms, according to Barnhart and research published by Microsoft tracking the program.

What’s Next

Automation has not reduced the group's headcount: Expel observed that HexagonalRodent involved as many as 31 individual operators, a number Hutchins described as consistent with a broader trend of North Korean cyber units scaling up by equipping low-skill workers with AI access. “They just keep adding more and more operators. Because they can just hand them access to an AI model, and they can now do things which they would have previously needed a development team to support,” he said.

Expel’s disclosure, covered by Wired on April 23, 2026, does not include a public technical report with indicators of compromise. Hutchins noted that standard EDR deployment would likely have detected and blocked the AI-generated malware in most enterprise environments, leaving independent developers and small Web3 teams as the segment most likely to encounter comparable intrusion attempts going forward.
