- NVIDIA donated its Dynamic Resource Allocation (DRA) Driver for GPUs to the CNCF, making GPU scheduling in Kubernetes fully open-source and community-governed.
- The company pledged $4 million over three years to provide GPU access for CNCF projects, alongside support from AWS, Google Cloud, Microsoft, Red Hat, and others.
- The DRA driver enables fine-grained GPU allocation in containerized environments, replacing static allocation that leaves expensive GPUs idle between jobs.
- New companion announcements include GPU support for Kata Containers, the KAI Scheduler entering CNCF Sandbox, and Dynamo 1.0 for large-scale AI workloads.
What Happened
NVIDIA announced at KubeCon Europe in Amsterdam on March 24, 2026, that it is donating its Dynamic Resource Allocation (DRA) Driver for GPUs to the Cloud Native Computing Foundation under the Kubernetes project. The move transfers the driver from vendor-governed development to full community ownership, allowing any organization to contribute to and modify the GPU scheduling layer that sits between Kubernetes and the underlying hardware.
Justin Boitano, who authored NVIDIA’s announcement, outlined additional commitments alongside the donation: a $4 million pledge over three years to fund GPU access for CNCF projects, GPU support for Kata Containers developed with the Confidential Containers community, and the onboarding of the NVIDIA KAI Scheduler as a CNCF Sandbox project. NVIDIA also released Dynamo 1.0, NemoClaw, and OpenShell as open-source tools for orchestrating large-scale AI workloads.
Why It Matters
GPU resource management has long been one of the weakest aspects of Kubernetes for AI workloads. Most organizations rely on static allocation, assigning GPUs to specific containers regardless of actual usage, which leaves expensive hardware idle between jobs. The DRA driver enables dynamic allocation, where GPUs are assigned on demand and reclaimed when unused, improving utilization in clusters running mixed workloads whose demand fluctuates.
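The contrast is easiest to see in a pod spec. Below is a minimal sketch of the static pattern the DRA driver replaces: a whole GPU requested through the device-plugin resource name and bound to the pod for its entire lifetime. Pod and image names here are illustrative, not from the announcement.

```yaml
# Static allocation via the device-plugin resource name: the GPU is
# bound to this pod from admission until deletion, even while idle.
apiVersion: v1
kind: Pod
metadata:
  name: train-job                       # illustrative name
spec:
  containers:
  - name: trainer
    image: example.com/trainer:latest   # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1               # one whole GPU, statically assigned
```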
Chris Aniszczyk, Chief Technology Officer of the CNCF, called the donation “a major milestone,” noting that community governance ensures the driver evolves based on real-world needs rather than a single vendor’s roadmap. Chris Wright, CTO and Senior Vice President of Global Engineering at Red Hat, added that “open source will be at the core of every successful enterprise AI strategy, bringing standardization to infrastructure.”
Technical Details
The DRA driver handles how compute resources get allocated to containerized workloads, supporting NVIDIA’s Multi-Process Service (MPS) and Multi-Instance GPU (MIG) technologies. MPS allows multiple containers to share a single GPU concurrently, while MIG partitions a physical GPU into isolated instances with dedicated memory and compute resources. The driver targets Kubernetes 1.32 and later, with full DRA support requiring version 1.33 or newer.
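With DRA, that request moves into a ResourceClaim that the scheduler satisfies at allocation time. The sketch below assumes the `resource.k8s.io/v1beta1` API available in Kubernetes 1.32 and a device class name of the form the NVIDIA driver publishes; all object names are illustrative.

```yaml
# A claim template describing one GPU from the driver's device class.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu                        # illustrative name
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com   # class advertised by the DRA driver (assumed)
---
# A pod that consumes the claim; the GPU is allocated when the pod is
# scheduled and released when the claim is freed.
apiVersion: v1
kind: Pod
metadata:
  name: inference-job                     # illustrative name
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: server
    image: example.com/server:latest      # illustrative image
    resources:
      claims:
      - name: gpu
```

Because allocation happens at scheduling time rather than being fixed in the container spec, the same claim mechanism can, depending on how the driver parameterizes it, hand out a whole GPU, an MPS share, or a MIG slice.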
The companion Kata Containers integration adds GPU support to lightweight virtual machines that provide stronger workload isolation than standard container runtimes. This matters for organizations running sensitive AI workloads that require hardware-level separation between tenants, such as healthcare or financial services deployments where data isolation is a regulatory requirement.
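Routing a workload onto Kata uses Kubernetes’ standard RuntimeClass mechanism. A hedged sketch, assuming a `kata` handler has already been configured in the node’s container runtime (object and image names are illustrative):

```yaml
# RuntimeClass pointing at the Kata handler configured in containerd/CRI-O.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata                            # must match the runtime handler on the node
---
# A pod that runs inside a lightweight VM instead of sharing the host kernel.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-inference               # illustrative name
spec:
  runtimeClassName: kata
  containers:
  - name: model-server
    image: example.com/model:latest      # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1                # GPU passed through to the VM
```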
Who’s Affected
Amazon Web Services, Broadcom, Canonical, Google Cloud, Microsoft, Nutanix, Red Hat, and SUSE are collaborating with NVIDIA on the effort. Ricardo Rocha, Lead of Platforms Infrastructure at CERN, noted that “for organizations like CERN, where efficiently analyzing petabytes of data is essential, community-driven innovation helps accelerate science.”
The donation directly benefits any organization running AI training or inference on Kubernetes, from startups sharing a handful of GPUs across teams to enterprises managing thousands of GPUs across multiple clusters. It also affects NVIDIA’s competitors: by establishing the reference implementation for Kubernetes GPU scheduling, NVIDIA ensures the ecosystem is optimized for its hardware architecture first.
What’s Next
The CNCF’s governance structure means the project will accept contributions from all vendors, but NVIDIA’s head start in driver maturity gives it a structural advantage in shaping how Kubernetes handles GPU workloads. AMD and Intel would need to either contribute compatible drivers or build alternative scheduling solutions. The additional CNCF Sandbox projects, particularly the KAI Scheduler, suggest NVIDIA intends to influence not just hardware allocation but the broader orchestration layer for AI infrastructure on Kubernetes.
The timing coincides with growing competition from AMD and Intel in the data center GPU market. Donating the DRA driver establishes a reference implementation tuned to NVIDIA’s hardware that competitors will need to match or adapt. The Grove project, a new Kubernetes API for orchestrating AI workloads announced alongside the donation, further extends NVIDIA’s influence over how cloud-native infrastructure handles GPU-intensive applications at scale.