An essay published on March 23 by Alvin Djajadikerta, CEO of Evidentia Labs and a Cambridge-trained molecular neuroscientist, argues that current AI tools accelerate scientific productivity while systematically reducing the diversity of research directions. Drawing on a study of 41 million research papers, Djajadikerta notes that scientists using AI publish more frequently and receive more citations, but that AI-augmented research collectively covers approximately five percent less topical ground than human-only research.
The central argument distinguishes between “normal science” — incremental work within established frameworks — and “disruptive science” that creates new paradigms. AI excels at the former: optimizing experiments, analyzing datasets, and generating variations within known parameter spaces. But the pattern matching that makes AI effective for optimization also makes it inherently conservative, gravitating toward topics well represented in its training data rather than toward unexplored territory.
Djajadikerta uses the metaphor of the London Underground map to illustrate the problem. AI can efficiently optimize routes between existing stations but cannot conceive of stations that do not yet exist. Scientific breakthroughs — the equivalent of building new stations — require conceptual leaps that current AI architectures are not designed to make.
The five percent reduction in topical coverage may seem modest, but Djajadikerta argues it compounds over time. As more researchers adopt AI tools that optimize within existing paradigms, the collective research landscape converges toward well-established topics, leaving emerging fields and unconventional hypotheses underexplored. The result is more papers but fewer genuinely new ideas.
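The essay does not give a formal model for this compounding, but the arithmetic is easy to sketch. A minimal illustration, assuming (purely hypothetically) that the five percent narrowing repeats with each successive research generation:

```python
# Illustrative sketch only: the essay gives no formal model. We assume,
# hypothetically, that each research generation narrows topical coverage
# by a further 5% relative to the previous one (coverage_n = 0.95 ** n).

NARROWING_PER_GENERATION = 0.05  # the five percent reduction reported in the study

def remaining_coverage(generations: int) -> float:
    """Fraction of the original topical landscape still explored."""
    return (1 - NARROWING_PER_GENERATION) ** generations

for n in (1, 5, 10, 20):
    print(f"after {n:2d} generations: {remaining_coverage(n):.1%} of topics covered")
# after  1 generations: 95.0% of topics covered
# after  5 generations: 77.4% of topics covered
# after 10 generations: 59.9% of topics covered
# after 20 generations: 35.8% of topics covered
```

Under that toy assumption, the explored landscape halves in under fifteen generations; the point is the shape of the curve, not the specific numbers.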
The essay proposes designing AI systems specifically for disruptive rather than incremental science — tools that prioritize novelty, identify underexplored research directions, and generate hypotheses that contradict existing consensus rather than extending it. Whether such systems are technically feasible with current architectures remains an open question, but the framing offers a useful corrective to the assumption that more AI in science automatically means better science.
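One crude ingredient of such a tool might simply invert the usual popularity weighting and surface research directions by how under-represented they are in the literature. A minimal sketch of that heuristic, with every topic label and count invented for illustration:

```python
from collections import Counter

# Hypothetical sketch of one ingredient of a "disruption-seeking" tool:
# rank candidate research directions by how UNDER-represented they are
# in a paper corpus, rather than by how well-trodden they are.
# All topic labels and counts here are invented for illustration.

corpus_topic_counts = Counter({
    "protein structure prediction": 12_400,
    "transformer scaling laws": 9_800,
    "graph neural networks": 7_100,
    "mechanochemical synthesis": 210,
    "bioelectric morphogenesis": 95,
})

def underexploration_score(topic: str, counts: Counter) -> float:
    """Higher score means less explored; add-one smoothing keeps
    topics entirely absent from the corpus finite and top-ranked."""
    return 1.0 / (counts[topic] + 1)

# Include one candidate the corpus has never seen.
candidates = list(corpus_topic_counts) + ["acoustic control of crystallization"]
for topic in sorted(candidates,
                    key=lambda t: underexploration_score(t, corpus_topic_counts),
                    reverse=True):
    print(f"{underexploration_score(topic, corpus_topic_counts):.2e}  {topic}")
```

Frequency inversion alone rewards noise as much as genuine white space, of course; weighting underexplored candidates by plausibility is presumably where the open feasibility question lives.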
