ANALYSIS

Study Finds AI Boosts Scientific Output but Narrows Research Scope by Five Percent

Anika Patel · Mar 24, 2026 · Updated Apr 7, 2026 · 4 min read
Engine Score 7/10 — Important

This story is important due to its focus on designing AI for disruptive scientific advancements, offering valuable insights for researchers and developers in specialized fields. While the topic has significant potential impact and actionability for a niche audience, its novelty and source reliability are moderate, preventing a higher score.

  • An analysis of 41 million research papers found that AI-augmented scientists publish more frequently and receive more citations, but collectively cover approximately 5% less topical ground than non-augmented researchers.
  • Alvin Djajadikerta, CEO of Evidentia Labs and Cambridge-trained molecular neuroscientist, argues that current AI tools accelerate incremental “normal science” while systematically reducing the diversity of research directions.
  • DeepMind’s GNoME project discovered 2.2 million new materials, but the vast majority were substitutions within already-known structure types rather than genuinely novel material classes.
  • The essay proposes designing AI systems that optimize for novelty and simplicity rather than prediction accuracy within existing frameworks.

What Happened

An essay published on March 23, 2026, by Alvin Djajadikerta in Asimov Press presents evidence that AI tools are simultaneously boosting scientific productivity and narrowing the scope of research. Djajadikerta, CEO of Evidentia Labs, a founding researcher at Science Works, and holder of a PhD in Molecular Neuroscience from Cambridge, draws on a study of 41 million research papers to argue that AI-augmented research is creating what he calls “hypernormal science” — accelerated incremental work within existing paradigms at the expense of genuinely disruptive discoveries.

The essay, published under the DOI 10.62211/29ej-27et, has drawn attention for framing a problem that the scientific community has discussed anecdotally but rarely quantified: the trade-off between AI-driven productivity gains and the diversity of ideas those gains produce.

Why It Matters

The findings challenge a widespread assumption that more AI in science automatically produces better science. While AI-augmented researchers publish more papers and accumulate more citations — both standard measures of scientific productivity — the collective research landscape is converging toward well-established topics. The approximately 5% reduction in topical coverage may appear modest in a single year, but Djajadikerta argues the effect compounds over time as more researchers adopt AI tools that optimize within existing frameworks rather than exploring new ones.
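The study reports a single figure of roughly 5%; to make the "compounds over time" claim concrete, the sketch below assumes (purely for illustration, not from the study) that each successive wave of AI adoption narrows coverage by that same rate.

```python
# Illustrative arithmetic only: the study measured a single ~5% narrowing of
# topical coverage. The per-wave compounding below is an assumed model used to
# visualise the claim that the effect compounds as adoption spreads.
rate = 0.05        # assumed narrowing per adoption wave (hypothetical)
coverage = 1.0     # fraction of topical ground still being explored

for wave in range(10):
    coverage *= (1 - rate)

remaining = coverage  # 0.95 ** 10 ≈ 0.599: ten such waves erase ~40% of coverage
```

Under this toy model, a loss that looks modest in any one year removes almost half the topical landscape within ten compounding steps, which is the shape of the concern the essay raises.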

“Paradigm shifts require replacing these with simpler alternatives whose implications haven’t yet been explored,” Djajadikerta writes, referring to the entrenched assumptions of existing paradigms. He uses the metaphor of the London Underground map to illustrate the problem: AI can efficiently optimize routes between existing stations but cannot conceive of stations that do not yet exist. Scientific breakthroughs, the equivalent of building entirely new stations, require conceptual leaps that current AI architectures are not designed to produce.

Technical Details

Djajadikerta’s argument rests on a distinction between “normal science” — Thomas Kuhn’s term for incremental work within established frameworks — and “disruptive science” that creates entirely new paradigms. Current AI systems are trained to minimize prediction error against datasets with predefined labels, which effectively locks them into existing conceptual vocabularies. This makes them powerful tools for optimization but structurally incapable of generating the kind of conceptual shifts that characterize major scientific advances.
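The structural point about predefined labels can be shown in miniature. The sketch below is a generic softmax classifier, not any specific system from the essay, and the label names are hypothetical crystal structure types chosen to echo the materials example: whatever the input, the output is always drawn from the fixed vocabulary.

```python
import math

# Hypothetical fixed label vocabulary: the model can rank these classes
# against each other but has no way to emit a class outside the list.
LABELS = ["perovskite", "spinel", "garnet"]

def softmax(scores):
    """Convert raw scores into probabilities over the fixed label set."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(scores):
    """Return the highest-probability label -- necessarily one of LABELS."""
    probs = softmax(scores)
    return LABELS[probs.index(max(probs))]

pred = predict([1.2, 0.4, -0.3])
# Minimizing prediction error tunes the scores, but every possible
# prediction stays inside the predefined conceptual vocabulary.
```

This is the sense in which error minimization against predefined labels locks a system into an existing conceptual vocabulary: training can sharpen the ranking, but no gradient step can add a station to the map.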

The essay cites DeepMind’s GNoME project as a telling illustration. GNoME discovered 2.2 million new materials, a result frequently presented as a triumph of AI-driven scientific discovery. However, Djajadikerta notes that the vast majority of these materials were substitutions within already-known crystal structure types — variations on existing themes rather than fundamentally new material classes. The sheer quantity was unprecedented, but the conceptual novelty was limited.

Historical examples reinforce the argument. James Clerk Maxwell condensed the known electromagnetic laws into four elegant equations, which implied the existence of electromagnetic waves and ultimately enabled radio technology. Albert Einstein’s special relativity displaced the prevailing luminiferous ether concept entirely. Both advances required simplification and radical reconceptualization of existing knowledge, not faster optimization within established parameter spaces.

Who’s Affected

Research institutions, funding agencies, and individual scientists who are integrating AI into their workflows face a strategic question: are they using AI to explore genuinely new intellectual territory or merely to move faster within familiar territory? The distinction has practical consequences for how research funding is allocated, how AI tools are designed and deployed for scientific use, and how the scientific community measures meaningful progress versus raw output volume.

The implications extend well beyond academia. Pharmaceutical companies, materials science laboratories, and climate research teams that rely heavily on AI-driven discovery pipelines may be systematically underexploring unconventional approaches. If AI tools steer research toward topics that are well-represented in their training data, emerging fields and contrarian hypotheses receive less attention by default — not through deliberate exclusion, but through algorithmic preference for the familiar and well-documented.

What’s Next

Djajadikerta proposes designing AI systems specifically for disruptive rather than incremental science — what he calls “visionary machines.” These tools would optimize for simplicity using the Minimum Description Length principle, enable cross-disciplinary analogies, and generate hypotheses that contradict existing scientific consensus rather than extending it. Whether such systems are technically feasible with current architectures remains an open question, and the essay identifies the problem more clearly than it demonstrates a viable path to solving it.
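The Minimum Description Length principle the essay invokes can be demonstrated with a toy example. The sketch below is not from the essay: it scores two hypothetical explanations of the same data (a compact law versus verbatim memorization) by the length of their description, preferring the shorter one, which is the Maxwell-style simplification the historical examples describe.

```python
# Toy MDL comparison: among hypotheses that reproduce the observations
# exactly, prefer the one with the shortest description.
observations = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

# Two hypothetical explanations, written as Python source strings:
candidates = {
    "law":  "lambda i: 2 * (i + 1)",                                # compact rule
    "memo": "lambda i: [2, 4, 6, 8, 10, 12, 14, 16, 18, 20][i]",    # raw lookup
}

def description_length(src, data):
    """Bits needed to state the hypothesis (8 per source character),
    counted only for hypotheses that reproduce the data exactly."""
    predict = eval(src)
    assert all(predict(i) == x for i, x in enumerate(data))
    return 8 * len(src)

scores = {name: description_length(src, observations)
          for name, src in candidates.items()}
best = min(scores, key=scores.get)  # MDL picks "law": same fit, fewer bits
```

Both hypotheses fit the data perfectly, so prediction accuracy alone cannot separate them; an MDL objective does, by rewarding the compressed law over the memorized table. Scaling that preference to real scientific hypothesis spaces is exactly the open feasibility question the essay leaves unresolved.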

