Nvidia has acquired SchedMD, the main commercial maintainer of the Slurm workload manager, and simultaneously released Nemotron 3 Nano, the first model in its third-generation “open” Nemotron family.
Together, the moves broaden Nvidia’s control over two choke points enterprises rely on to run AI at scale: how GPU clusters are scheduled and which model weights are deployed.
Slurm matters because it decides which jobs run when, where they land, and how efficiently expensive GPU capacity is used across thousands of accelerators.
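For context, Slurm jobs are typically described in batch scripts that tell the scheduler what resources each job needs, and the scheduler arbitrates among those requests across the cluster. The sketch below is a minimal illustration of that pattern; the partition name, resource sizes and training command are hypothetical placeholders, not details from Nvidia or SchedMD.

```python
import subprocess
import textwrap

# Hypothetical single-node, 8-GPU training job. Partition, limits and the
# training command are placeholders; real values depend on the cluster.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=llm-finetune
    #SBATCH --partition=gpu            # cluster-specific queue name
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:8               # request 8 GPUs on the node
    #SBATCH --cpus-per-task=32
    #SBATCH --time=12:00:00            # wall-clock limit the scheduler enforces
    srun python train.py --config config.yaml
    """)

with open("finetune.sbatch", "w") as f:
    f.write(job_script)

# sbatch only queues the job; Slurm decides when and where it runs based on
# priority, available GPUs and the limits requested above.
subprocess.run(["sbatch", "finetune.sbatch"], check=True)
```

Decisions about how requests like this are prioritized, packed onto nodes and preempted are exactly the scheduler behavior Nvidia now owns the roadmap for.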
Nvidia said Slurm is used in more than half of the top 10 and top 100 systems on the TOP500 supercomputer list, and described the acquisition as a way to “strengthen the open-source software ecosystem” for HPC and AI.
Nvidia also said it will keep Slurm open-source and “vendor-neutral,” and continue offering support, training and development for SchedMD customers. SchedMD CEO Danny Auble called the deal “validation” of Slurm’s role in demanding HPC and AI environments and said the project will remain open source.
On the model side, Nvidia’s newsroom release positioned Nemotron 3 as an open stack of models, datasets and reinforcement-learning tooling built for multi-agent enterprise workflows where cost, long-context reliability and auditability matter.
Nvidia said Nemotron 3 Nano is a 30-billion-parameter model (up to 3 billion active parameters per token) with a 1-million-token context window and up to 4x higher token throughput than Nemotron 2 Nano. The weights are governed by Nvidia’s Open Model License, which the company describes as commercially usable and permitting creation and distribution of derivative models.
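For teams sizing up the model, the sketch below shows the OpenAI-compatible calling pattern Nvidia uses for its hosted model endpoints; the base URL and model identifier here are assumptions for illustration and should be checked against Nvidia’s catalog rather than treated as confirmed names from the release.

```python
# Minimal sketch of calling a hosted Nemotron endpoint through an
# OpenAI-compatible API. The base_url and model name below are assumptions
# for illustration, not identifiers confirmed by Nvidia's announcement.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted-API endpoint
    api_key="YOUR_NVIDIA_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/nemotron-3-nano",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a concise enterprise assistant."},
        {"role": "user", "content": "Summarize the key obligations in the attached policy text."},
    ],
    max_tokens=512,
    temperature=0.2,
)

print(response.choices[0].message.content)
```

The practical appeal of the long context window is that workflows like the one above can pass large documents or multi-agent transcripts in a single request instead of chunking them.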
The timing points to competitive pressure on two fronts.
Nvidia is pushing these open releases as Chinese open models from groups including Alibaba and DeepSeek gain adoption, and as Meta is reported to be weighing a shift toward closed-source development. In October, Airbnb CEO Brian Chesky told Bloomberg the company is relying heavily on Alibaba’s Qwen model because it is “fast and cheap.”
For CIOs, the practical question is whether Nvidia’s “open” posture still increases dependency. Nvidia says it intends to support “heterogeneous clusters” with Slurm across “diverse hardware and software environments,” but ownership could give Nvidia sway over the engineering priorities that determine how the scheduler is optimized, especially as hyperscalers invest in custom silicon and look to reduce their exposure to Nvidia GPUs.
Nvidia is selling transparency as a differentiator. Multiple U.S. states and government entities have restricted or banned Chinese models, such as DeepSeek, on security grounds, creating demand for alternatives where training lineage and testing tooling are easier to scrutinize.
Financial terms of the acquisition were not disclosed. SchedMD was founded in 2010 in Livermore, California, by Slurm developers Morris “Moe” Jette and Danny Auble, and has about 40 employees. Nvidia said Nemotron 3 Super and Ultra are due in the first half of 2026, milestones enterprises can use to judge whether Nvidia sustains its “open” commitments while tightening its grip on the AI software stack.