- Google co-founder Sergey Brin has personally assembled a DeepMind “strike team” to improve Gemini’s coding capabilities, according to reporting by The Information.
- Research engineer Sebastian Borgeaud, formerly head of DeepMind’s pretraining group, is leading the effort under CTO Koray Kavukcuoglu.
- DeepMind researchers internally rate Claude’s code output as superior to Gemini’s — a gap Brin cited as the core motivation for the initiative.
- Gemini engineers are now required to use Google’s internal agent tools on complex tasks, with adoption tracked on a company leaderboard called Jetski.
What Happened
Google co-founder Sergey Brin has formed a dedicated DeepMind “strike team” tasked with closing Gemini’s internal coding gap with Anthropic’s Claude, according to reporting from The Rundown AI citing The Information. Research engineer Sebastian Borgeaud, who previously ran DeepMind’s pretraining operations, is heading the group alongside CTO Koray Kavukcuoglu, with Brin directly involved in the effort.
In an internal memo described by The Information, Brin characterized improving Gemini’s coding ability as “the shortest route” to building AI systems capable of training subsequent AI models — framing the effort as foundational rather than as a product response to market competition.
Why It Matters
Google entered 2026 with reduced AI momentum after a strong late-2025 run that included the Gemini 2.0 rollout and expanded integrations across Google Workspace. The strike team’s mandate is primarily internal: to automate Google’s own engineering workflows as a prerequisite for recursive AI improvement, not to ship a faster consumer product.
Anthropic established a commercial foothold in software development workflows following the February 2025 release of Claude 3.7 Sonnet, which introduced extended thinking for complex reasoning tasks and was broadly adopted by development teams. That positioning appears to be the direct benchmark Brin’s team is targeting.
Technical Details
According to The Information, DeepMind researchers internally assess Claude’s code output as superior to Gemini’s, and that acknowledgment is what triggered Brin’s intervention. Borgeaud’s background in pretraining suggests the effort may extend beyond prompting and deployment changes to modifications in how future Gemini models are trained on code.
To drive in-house adoption, Google has mandated that Gemini engineers use the company’s internal agent tools when handling complex tasks. Usage is tracked on an internal leaderboard called Jetski, a mechanism that moves adoption measurement from optional benchmarks to live engineering workflows. Brin’s memo drew a direct line from coding performance to what he described as building AI that trains the next AI.
Who’s Affected
The most immediate effects are internal to Google: DeepMind researchers and Gemini engineers now operate under new workflow mandates and usage tracking via Jetski. Externally, the relevant stakeholders are enterprise development teams and AI tooling companies that have standardized on Claude for code generation, review, and testing; if the initiative succeeds in narrowing the gap, those teams would face a materially more competitive Gemini offering.
Anthropic and OpenAI, whose Codex agent targets overlapping agentic coding use cases, are the directly affected competitors. The move also signals Google’s internal acknowledgment that current Gemini coding performance is not at parity with frontier rivals.
What’s Next
No public deliverable or timeline has been announced for the strike team. Google I/O, expected in May 2026, is the next major forum at which Gemini coding advances could be disclosed publicly. Borgeaud’s pretraining background suggests any near-term progress may not be visible in API benchmarks until a new model generation is trained and released.