
400 Gbps Silicon Photonics Could Solve AI’s Biggest Hidden Bottleneck

Nikhil B · Apr 5, 2026 · 2 min read
Engine Score 7/10 — Important

Coherent Corp expanded its supply deal with NVIDIA following a breakthrough in 400 Gbps silicon photonics — technology that uses light instead of electricity to transfer data within AI clusters. The AI infrastructure bottleneck is shifting from compute (how fast GPUs process) to interconnect (how fast thousands of GPUs talk to each other).

What Silicon Photonics Is

Traditional data center interconnects use copper cables and electrical signals. Silicon photonics replaces electrical signals with laser light traveling through silicon waveguides. The advantages:

  • Speed: Optical channels support far higher data rates per lane than electrical signaling
  • Energy: Photonic interconnects consume 50-70% less power per bit transferred
  • Distance: Optical signals degrade far more slowly over distance, enabling larger cluster architectures
  • Heat: Less electrical resistance means less waste heat — a critical constraint in dense GPU racks
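The power claim above is easy to put in cluster-scale terms. A back-of-envelope sketch: the 50–70% savings range is from this article, but the baseline energy-per-bit figure below is an assumed placeholder, not a vendor specification.

```python
# Back-of-envelope: interconnect power at cluster scale.
# The 5 pJ/bit electrical baseline is an illustrative assumption;
# the 50-70% savings range comes from the article.

ELECTRICAL_PJ_PER_BIT = 5.0       # assumed placeholder, not a measured spec
lane_gbps = 400                   # per-lane rate
lanes = 8 * 10_000                # 8 lanes/port across 10,000 ports

bits_per_sec = lane_gbps * 1e9 * lanes
electrical_watts = bits_per_sec * ELECTRICAL_PJ_PER_BIT * 1e-12

for savings in (0.50, 0.70):
    photonic_watts = electrical_watts * (1 - savings)
    print(f"{savings:.0%} savings: {electrical_watts / 1e3:.0f} kW -> "
          f"{photonic_watts / 1e3:.0f} kW")
```

Even with a rough baseline, the point stands: at these data rates, interconnect power is measured in hundreds of kilowatts per cluster, so a 50–70% cut is material.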

The 400 Gbps Milestone

Coherent’s breakthrough achieves 400 Gbps per lane using silicon photonics. For context:

  • Current standard: 100 Gbps per lane (PAM4 electrical signaling)
  • Coherent’s achievement: 400 Gbps per lane (4x improvement)
  • Full link capacity: With 8 lanes per port, this enables 3.2 Tbps per port
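The per-port arithmetic above is straightforward to check. A minimal sketch, assuming the 8-lanes-per-port configuration stated above:

```python
# Per-port link capacity: lanes x per-lane rate, converted Gbps -> Tbps.
def port_capacity_tbps(lane_gbps: float, lanes: int) -> float:
    return lane_gbps * lanes / 1000

print(port_capacity_tbps(100, 8))  # current electrical standard: 0.8 Tbps
print(port_capacity_tbps(400, 8))  # Coherent's 400G lanes: 3.2 Tbps
```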

In an AI cluster with 10,000 GPUs, the interconnect determines how fast gradients synchronize during training and how quickly inference requests distribute across nodes. A 4x improvement in per-lane bandwidth means training runs complete faster and inference latency drops — without adding more GPUs.
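To see why per-lane bandwidth maps directly onto training speed, consider a rough model of gradient synchronization. Under a ring all-reduce, each GPU moves roughly 2·(n−1)/n times the gradient payload over its link per step. The model size and the assumption of a pure ring algorithm are illustrative choices, not figures from this article; real clusters use hierarchical collectives and overlap communication with compute, so only the 4x ratio, not the absolute times, is the takeaway.

```python
# Rough gradient all-reduce time under a ring algorithm:
# each GPU moves about 2*(n-1)/n * model_bytes over its own link.
# Model size is an assumed example, not an article figure.

def allreduce_seconds(model_gb: float, n_gpus: int, link_tbps: float) -> float:
    bytes_moved = 2 * (n_gpus - 1) / n_gpus * model_gb * 1e9
    link_bytes_per_sec = link_tbps * 1e12 / 8
    return bytes_moved / link_bytes_per_sec

MODEL_GB = 700           # assumed: fp16 gradients for a ~350B-param model
for tbps in (0.8, 3.2):  # 100G vs 400G lanes, 8 lanes per port
    t = allreduce_seconds(MODEL_GB, 10_000, tbps)
    print(f"{tbps} Tbps/port: ~{t:.2f} s per synchronization step")
```

The quadrupled port bandwidth cuts the communication-bound portion of each step by the same factor, which is exactly the gain the article describes arriving "without adding more GPUs."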

The NVIDIA Partnership

NVIDIA’s AI infrastructure — from DGX systems to the Blackwell platform — relies on high-speed interconnects between GPUs. The expanded Coherent deal suggests NVIDIA will integrate 400 Gbps silicon photonics into next-generation systems, likely the Blackwell Ultra and Rubin platforms expected in late 2026 and 2027.

MLPerf v6.0 benchmarks showed multi-node submissions increasing 30%, with the largest system spanning 72 nodes and 288 accelerators. At that scale, interconnect bandwidth becomes the limiting factor, not individual GPU performance.

Infrastructure Companies Positioned to Win

The shift from compute bottleneck to interconnect bottleneck creates opportunities for:

  • Coherent Corp (COHR): Leading silicon photonics supplier with the NVIDIA deal
  • Broadcom (AVGO): Custom networking silicon for hyperscaler interconnects
  • Arista Networks (ANET): High-speed switching infrastructure for AI clusters
  • Lumentum (LITE): Optical components for data center networks

What This Changes

The practical implication: AI training and inference will get faster without needing new GPUs. Silicon photonics improvements in the interconnect layer provide performance gains that compound with GPU improvements rather than competing with them. For companies building AI infrastructure, interconnect investment is now as important as GPU procurement — a shift that most AI strategy discussions still haven’t absorbed.


Nikhil B

Founder of MegaOne AI. Covers AI industry developments, tool launches, funding rounds, and regulation changes. Every story is sourced from primary documents, fact-checked, and rated using the six-factor Engine Score methodology.
