Microsoft announced in a blog post that it brought its Fairwater AI datacenter in Atlanta online in October and connected it to its first Fairwater site in Wisconsin, running the two as a unified AI superfactory.

The article describes a dedicated AI WAN with 120,000 miles of fiber added in the past year, a two-story design for higher GPU density, and NVIDIA GB200 NVL72 rack-scale systems that can scale to hundreds of thousands of GPUs across sites.

“This is about building a distributed network that can act as a virtual supercomputer for tackling the world’s biggest challenges,” said Alistair Speirs, a Microsoft general manager focused on Azure infrastructure.

The network supports OpenAI, the Microsoft AI Superintelligence Team, and Copilot workloads, and additional Fairwater sites under construction will join the AI WAN. Microsoft’s architecture post adds that Fairwater uses a single flat network to integrate current-generation Blackwell GPUs with 800 Gbps connectivity, and that an optical backbone links the sites so multiple locations can cooperate on near-real-time model training.