- Google DeepMind assembled a specialized coding team led by Sebastian Borgeaud, who previously oversaw pre-training for Gemini models.
- An internal assessment concluded that Anthropic’s programming tools currently outperform Google’s own offerings.
- Co-founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu are directly involved, with Brin mandating internal agent use for all Gemini engineers.
- Google is training models on its proprietary internal codebase — which cannot be publicly released — to narrow the performance gap.
What Happened
Google DeepMind has assembled a dedicated team of researchers and engineers to sharpen the coding capabilities of its Gemini models, according to an April 2026 report by The Decoder citing The Information. The team is led by Sebastian Borgeaud, a DeepMind engineer who previously oversaw pre-training for Gemini models. The effort was prompted in part by an internal assessment concluding that Anthropic's coding tools currently outperform Google's own offerings.
Why It Matters
AI coding has emerged as a primary competitive front among major labs in 2026. OpenAI recently shut down its Sora video generator to redirect compute toward training and running other AI models, a resource reallocation that reflects how much weight the industry now places on coding performance. Anthropic’s Claude has established a strong foothold among software developers, particularly in agentic, multi-step coding contexts.
Technical Details
The team is targeting complex, long-horizon programming tasks — including writing new software from scratch — that require models to read multiple files and infer user intent across extended workflows. Google is increasingly training Gemini on its internal codebase, which differs substantially from the public repositories typically used to train general-purpose coding agents. Because the training data is proprietary, these internally trained model variants cannot be publicly released; they are intended instead as a development pipeline to improve models that eventually ship to users. Google also tracks adoption of its internal coding tool, “Jetski,” ranking teams by usage frequency — an approach analogous to Meta’s practice of monitoring token consumption as an internal performance metric.
Who’s Affected
Google's engineering workforce faces direct operational changes: Sergey Brin has required every Gemini engineer to use internal agents for complex, multi-step tasks, and some teams outside DeepMind have been required to attend AI training sessions. Developers and enterprises that currently rely on Anthropic's Claude for software development workflows would face increased competition if Google narrows the capability gap. OpenAI, which competes directly in coding-focused AI products, is navigating similar internal pressures.
What’s Next
Brin framed stronger coding capabilities as a prerequisite for AI that can eventually improve itself. In an internal memo, he wrote: “To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers” of code. He described a longer-term scenario in which a capable coding agent, paired with AI that handles mathematical reasoning and runs experiments, could automate significant portions of AI research and engineering work — a goal that would depend on sustained internal adoption and the training data advantage Google is now trying to build.