Microsoft Research has developed MOSAIC, a microLED-based optical interconnect system that uses approximately 50 percent less energy than the laser-based optical cables currently connecting servers in AI data centers. The technology, developed in collaboration with Azure Core, Azure Hardware Systems, and MediaTek, is expected to reach commercialization with industry partners by late 2027.
The energy savings target a growing bottleneck in AI infrastructure: the interconnects that move data between GPUs, switches, and storage arrays within a data center. As AI training clusters scale to tens of thousands of GPUs, the optical cables connecting them consume a significant and increasing share of total facility power. Current systems use semiconductor lasers to transmit data as light through fiber optic cables — a technology that is fast but power-hungry, generating heat that requires additional cooling and limiting the density at which servers can be packed together.
MOSAIC replaces semiconductor lasers with microLED emitters, which are fundamentally more energy-efficient light sources. MicroLEDs convert electrical energy to light with less waste heat than laser diodes, operate at lower voltages, and can be manufactured at higher densities on a single chip. The system maintains the bandwidth performance of laser-based interconnects while halving the power consumption per link — a reduction that compounds across the thousands of interconnections in a modern AI training cluster.
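To see how a per-link reduction compounds, consider a rough back-of-the-envelope model. The figures below (link count, per-link bandwidth, energy per bit) are hypothetical placeholders, not numbers from Microsoft; only the roughly 50 percent reduction comes from the article:

```python
# Illustrative model of how halving per-link energy compounds across a
# cluster. All inputs are hypothetical; only the ~50% cut is from the
# reported MOSAIC claim.

def interconnect_power_kw(num_links: int, gbps_per_link: float,
                          pj_per_bit: float) -> float:
    """Total interconnect power in kW for a cluster of optical links."""
    bits_per_sec = num_links * gbps_per_link * 1e9
    watts = bits_per_sec * pj_per_bit * 1e-12
    return watts / 1e3

links = 50_000              # hypothetical link count for a large AI cluster
bandwidth = 800             # Gb/s per link (hypothetical)
laser_pj = 10.0             # hypothetical energy per bit, laser-based link
mosaic_pj = laser_pj * 0.5  # the article's ~50% reduction

laser_kw = interconnect_power_kw(links, bandwidth, laser_pj)
mosaic_kw = interconnect_power_kw(links, bandwidth, mosaic_pj)
print(f"laser-based: {laser_kw:.0f} kW, microLED: {mosaic_kw:.0f} kW, "
      f"saved: {laser_kw - mosaic_kw:.0f} kW")
# prints: laser-based: 400 kW, microLED: 200 kW, saved: 200 kW
```

Even with these made-up inputs, the point stands: a fixed fractional saving per link scales linearly with link count, so the absolute savings grow with the cluster.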
The research is led by Paolo Costa, a Partner Research Manager at Microsoft, with oversight from Doug Burger, Technical Fellow and Corporate Vice President at Microsoft Research. The involvement of MediaTek as a manufacturing partner suggests the technology will leverage existing semiconductor fabrication infrastructure rather than requiring entirely new production facilities, which should accelerate the path from research to deployment.
For the AI industry, the timing addresses a practical constraint. Data center operators are struggling to secure sufficient power for new AI facilities, with interconnection queues stretching years in most U.S. markets. A technology that reduces power consumption by 50 percent on a major subsystem effectively increases the compute capacity that can be deployed within existing power envelopes. If MOSAIC delivers on its efficiency claims at scale, Microsoft could deploy significantly more GPU capacity in its Azure data centers without proportional increases in power infrastructure — a competitive advantage in a market where power availability is the binding constraint on AI compute expansion.
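The envelope arithmetic can be sketched as follows. The facility size, the share of power drawn by optical links, and the per-GPU power budget below are all assumptions chosen for illustration; the article gives none of them:

```python
# Hypothetical illustration of how a 50% cut to interconnect power frees
# capacity under a fixed facility power envelope. The shares and sizes
# below are assumptions, not figures from the article.

facility_mw = 100.0    # fixed power envelope (hypothetical)
optics_share = 0.10    # fraction of facility power in optical links (hypothetical)
kw_per_gpu = 1.0       # all-in power budget per deployed GPU (hypothetical)

optics_mw = facility_mw * optics_share
freed_mw = optics_mw * 0.5          # the article's ~50% reduction
extra_gpus = freed_mw * 1000 / kw_per_gpu
print(f"freed {freed_mw:.1f} MW -> room for ~{extra_gpus:,.0f} more GPUs")
# prints: freed 5.0 MW -> room for ~5,000 more GPUs
```

The leverage depends entirely on what fraction of facility power the interconnects actually draw, which is why the claim matters most for the densest, largest clusters.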
