- 360 Digital Security Group announced on April 22, 2026 that its AI systems identified approximately 1,000 vulnerabilities across widely used software, according to Bloomberg.
- Bloomberg’s reporting draws a parallel between 360’s approach and a separate AI vulnerability-hunting capability referred to as “Mythos.”
- The announcement positions 360 as a Chinese competitor in a field where US firms including Google and Anthropic have made documented advances since 2024.
- 360 has not publicly specified the affected software, vulnerability classes, severity thresholds, or its disclosure timeline as of publication.
What Happened
360 Digital Security Group, the Beijing-headquartered cybersecurity firm chaired by founder Zhou Hongyi, disclosed on April 22, 2026 that its AI systems had identified approximately 1,000 software vulnerabilities across widely deployed applications, according to Bloomberg. The company is positioning the capability as evidence that Chinese AI-powered security research has reached a scale comparable to similar programs at major US firms.
Bloomberg’s headline frames the development as “echoing Mythos,” a reference to a distinct AI-driven vulnerability-hunting capability the outlet has previously reported on. The comparison suggests 360 is replicating or approaching results achieved by that system in an automated, large-scale bug-discovery context.
Why It Matters
AI-assisted vulnerability discovery has become one of the more concrete demonstrations of LLM capability in applied security research. In November 2024, Google's Project Zero and DeepMind teams published results showing that an AI agent called Big Sleep — a successor to the Project Naptime framework — had independently discovered a stack buffer underflow vulnerability in SQLite that human researchers and conventional fuzzing had not flagged. The model identified the flaw by reasoning over the codebase in a way that closely resembled a skilled human auditor.
Anthropic has separately described how its Claude models can support code analysis workflows relevant to vulnerability research, positioning AI assistants as accelerants for security teams operating at scale. 360’s announcement, if borne out by independent verification, would extend this AI-in-security-research arc to a Chinese firm operating outside the US regulatory and export-control environment.
Technical Details
The Bloomberg report, constrained by paywall access at time of publication, does not specify which software applications were targeted, what vulnerability classes were identified, or the severity distribution of the roughly 1,000 reported findings. The absence of that granularity matters: a count of 1,000 vulnerabilities could reflect a mix of high-severity memory-safety flaws and low-severity informational issues, and the significance of the figure depends heavily on how 360 counts a finding as a confirmed vulnerability.
AI-based vulnerability research at scale typically relies on one or more techniques: LLM-guided static analysis that reasons over source or compiled code, fuzzing campaigns where AI models generate and prioritize test inputs, or hybrid methods combining symbolic execution with model inference. Producing 1,000 confirmed vulnerabilities — as opposed to candidate findings flagged for human triage — would require a validation pipeline capable of filtering false positives at meaningful precision.
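The distinction between candidate findings and confirmed vulnerabilities can be made concrete with a minimal sketch of a validation stage of the kind described above. Everything here is illustrative — the `Candidate` structure, the `validate` function, and the reproducer callables are assumptions for exposition, not 360's actual tooling or any specific framework's API:

```python
# Minimal sketch, under stated assumptions: AI-flagged findings are only
# counted as confirmed vulnerabilities if a reproducer actually triggers
# the fault, filtering out model false positives before any human triage.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    target: str                      # component the model flagged (hypothetical name)
    description: str                 # the model's hypothesis about the flaw
    reproducer: Callable[[], bool]   # returns True iff the fault triggers

def validate(candidates: list[Candidate]) -> list[Candidate]:
    """Keep only candidates whose reproducer confirms the fault.

    This filtering step is what separates "1,000 confirmed
    vulnerabilities" from "1,000 model-generated hypotheses".
    """
    confirmed = []
    for c in candidates:
        try:
            if c.reproducer():
                confirmed.append(c)
        except Exception:
            # A reproducer that itself errors is a signal worth human
            # review, but not an automated confirmation.
            pass
    return confirmed

# Toy usage: one true positive, one model hallucination.
def crashes() -> bool:
    return True    # stands in for "target faults under this input"

def benign() -> bool:
    return False   # input turns out to be handled safely

findings = [
    Candidate("libexample", "heap OOB write in parser", crashes),
    Candidate("libother", "claimed use-after-free in scheduler", benign),
]
print(len(validate(findings)))  # 1 of 2 candidates survives validation
```

At the scale 360 is claiming, the interesting engineering question is the precision of exactly this stage: a pipeline that confirms 1,000 findings from, say, 100,000 model-generated candidates implies substantial automated reproduction infrastructure.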
360 Digital Security operates one of China’s larger proprietary cyberthreat intelligence networks and has invested in AI model development through its subsidiary 360 AI. The firm’s scale of data access — from endpoint telemetry across its installed base — provides a training and validation surface that smaller security-focused AI efforts may lack.
Who’s Affected
Software developers and vendors whose products contain the flagged vulnerabilities face potential exposure, depending on whether 360 pursues coordinated disclosure with affected parties before publication. The international security community’s standard practice — informing vendors privately and allowing a remediation window, typically 90 days under Google Project Zero’s model — has not been confirmed as 360’s approach here.
US cybersecurity firms and AI labs competing in automated vulnerability research, including Google’s Project Zero team and organizations building security tooling on top of Anthropic’s Claude, are now operating in a market where a Chinese competitor has publicly claimed comparable output. Enterprise software buyers evaluating AI-assisted security platforms will face a more complex vendor landscape as a result.
What’s Next
360 has not confirmed whether identified vulnerabilities have been reported to affected vendors or registered with CVE-tracking bodies such as MITRE’s CVE Program. That disclosure posture — and whether the firm shares technical details publicly — will determine how the broader research community can evaluate the claimed results. Bloomberg’s full reporting is expected to include additional technical specifics that were not available in the article’s summary metadata at publication time.