Marvell Technology has completed its $3.25 billion acquisition of Celestial AI, a company developing optical interconnect technology for AI data centers. The deal, finalized in late March 2026, gives Marvell access to photonic computing capabilities that address one of the most critical bottlenecks in AI infrastructure: moving data between chips fast enough to keep GPUs from sitting idle.
Celestial AI’s technology uses light instead of electrical signals to transfer data between processors, memory, and accelerators within data centers. Optical interconnects offer higher bandwidth, lower latency, and significantly lower power consumption than copper-based alternatives — advantages that become decisive at the scale of modern AI training clusters where thousands of GPUs must communicate continuously.
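The power advantage in particular compounds at cluster scale. A rough sketch of why, using assumed per-bit energy and link-count figures chosen purely for illustration (none of these numbers come from Marvell or Celestial AI):

```python
# Back-of-envelope comparison of interconnect power at cluster scale.
# All figures below are assumptions for illustration, not vendor specs.

def link_power_watts(pj_per_bit: float, gbps: float) -> float:
    """Power drawn by one link: energy per bit times bits per second."""
    return pj_per_bit * 1e-12 * gbps * 1e9

# Assumed figures for illustration only.
ELECTRICAL_PJ_PER_BIT = 5.0   # long-reach copper SerDes (assumed)
OPTICAL_PJ_PER_BIT = 1.0      # co-packaged optical link (assumed)
N_GPUS = 10_000
LINKS_PER_GPU = 8
GBPS_PER_LINK = 800

def cluster_link_watts(pj_per_bit: float) -> float:
    """Total interconnect power for the assumed cluster."""
    return N_GPUS * LINKS_PER_GPU * link_power_watts(pj_per_bit, GBPS_PER_LINK)

print(f"electrical links: {cluster_link_watts(ELECTRICAL_PJ_PER_BIT) / 1e6:.2f} MW")
print(f"optical links:    {cluster_link_watts(OPTICAL_PJ_PER_BIT) / 1e6:.2f} MW")
```

Under these assumptions the electrical fabric alone draws several times the power of the optical one, a gap that grows linearly with GPU count and link speed.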
The acquisition positions Marvell as a vertically integrated provider of AI data center connectivity. The company already supplies networking chips, storage controllers, and custom silicon to hyperscale cloud providers. Adding Celestial AI’s optical technology allows Marvell to offer a complete data movement stack, from chip-to-chip interconnects to rack-scale networking — a capability that NVIDIA is building internally through its NVLink and NVSwitch technologies.
At $3.25 billion, the deal reflects the premium that AI infrastructure companies command. Celestial AI was pre-revenue at the time of acquisition, its technology still transitioning from prototype to production deployment. Marvell is paying for the technology’s potential to become essential infrastructure as AI clusters scale beyond what electrical interconnects can efficiently support.
The broader context is a race to solve AI’s data movement problem. Training frontier models requires moving petabytes of data between thousands of processors, and the speed of this data movement — not the speed of the processors themselves — increasingly determines training time and cost. Companies that can deliver optical interconnects at scale will capture a critical layer of the AI infrastructure stack, one that faster GPUs alone cannot substitute for.
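Why bandwidth rather than compute sets the floor on step time can be seen in a simple estimate of gradient synchronization. The sketch below models a ring all-reduce, a standard pattern for summing gradients across GPUs; the model size, GPU count, and link speeds are assumptions for illustration, not figures from the article:

```python
# Estimate per-step gradient synchronization time under a ring all-reduce,
# where each GPU sends roughly 2*(n-1)/n of the gradient bytes.
# Model size, GPU count, and link rates below are illustrative assumptions.

def allreduce_seconds(param_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Time for one ring all-reduce over a link of the given speed."""
    bytes_moved = 2 * (n_gpus - 1) / n_gpus * param_bytes
    return bytes_moved / (link_gbps * 1e9 / 8)  # Gbit/s -> bytes/s

# Assumed: 70B-parameter model, fp16 gradients (2 bytes per parameter).
grad_bytes = 70e9 * 2

copper = allreduce_seconds(grad_bytes, n_gpus=1024, link_gbps=400)
optical = allreduce_seconds(grad_bytes, n_gpus=1024, link_gbps=1600)

print(f"400 Gb/s electrical link: {copper:.2f} s per sync")
print(f"1.6 Tb/s optical link:    {optical:.2f} s per sync")
```

Unless computation can hide it, every second of synchronization is a second the GPUs sit idle, which is why the sync time scales with link bandwidth rather than with processor speed.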
