ClearML Introduces Floating NVIDIA AI Enterprise License Management with One-click NVIDIA NIM Deployments

March 19, 2026

ClearML has announced native floating license management for NVIDIA AI Enterprise licenses with one-click deployment of NVIDIA NIM microservices across AI infrastructure. The feature, available now to ClearML enterprise customers, fundamentally changes how organizations consume NVIDIA AI Enterprise software licenses, moving from a static per-GPU assignment model to a dynamic pool that follows active workloads.

The problem with static license assignment

NVIDIA AI Enterprise is a software suite that runs on top of GPU hardware, providing the optimized runtimes, security, and enterprise support organizations need to deploy AI at scale; it includes NIM containers for model inference. Licenses for NVIDIA AI Enterprise are tied to the GPUs they run on.

In practice, most enterprises don’t need all of their GPUs running NVIDIA AI Enterprise workloads simultaneously. A cluster of 100 GPUs might have dozens sitting idle at any given moment, between training runs, overnight, or while teams wait in queue. Under a static assignment model, licenses attached to those idle GPUs sit unused, while other teams may be waiting to run NIM workloads that need a license to proceed.

The inefficiency isn’t in how many licenses you have; it’s in how they’re deployed. Floating licenses solve this by decoupling the license from the hardware and attaching it to the workload instead.

How floating license management works

With ClearML managing the license lifecycle, the flow works like this (a minimal sketch in code follows the list):

  • A team launches a NIM workload, say, running inference on a Nemotron model for a production application or an internal fine-tuning job.
  • ClearML automatically draws a license from the central pool and assigns it to that workload for its duration. No manual license request, no IT ticket, no waiting.
  • When the workload completes or goes idle, the license is automatically returned to the pool, immediately available for the next team or project that needs it.
  • This repeats across teams and environments: on-premises, cloud, or hybrid, with the entire license pool managed from a single ClearML control plane.

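Under the hood, the pool behaves like a counting semaphore over licenses: a workload blocks until a license is free, holds it while running, and releases it on exit. The sketch below models those checkout-and-return semantics in Python; it is an illustrative toy only, not ClearML's implementation, and the class and names are hypothetical:

    import threading
    from contextlib import contextmanager

    class FloatingLicensePool:
        """Toy model of a floating license pool: a fixed number of
        licenses shared across workloads, checked out at launch and
        returned when the workload finishes or goes idle."""

        def __init__(self, total_licenses: int):
            # BoundedSemaphore also guards against returning more
            # licenses than the pool actually holds.
            self._slots = threading.BoundedSemaphore(total_licenses)

        @contextmanager
        def checkout(self, workload: str):
            self._slots.acquire()  # block until a license is free
            try:
                print(f"{workload}: license checked out")
                yield
            finally:
                self._slots.release()  # license returns to the pool
                print(f"{workload}: license returned")

    pool = FloatingLicensePool(total_licenses=2)
    with pool.checkout("nemotron-inference"):
        pass  # run the NIM workload while holding a license

ClearML performs the equivalent checkout and return automatically, across machines and environments rather than threads, so teams never touch this logic themselves.
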
The practical upshot: instead of needing a license for every GPU in your fleet, you need licenses sized to the number of workloads your organization is likely to run concurrently. For most enterprises, actual concurrent utilization is a fraction of total GPU count, so the same license pool covers a much larger infrastructure footprint, and each license is used more productively.
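
As a back-of-the-envelope illustration (the 100-GPU fleet echoes the example above; the 30-workload peak is an assumed figure, not a ClearML number):

    total_gpus = 100           # fleet size, as in the example above
    peak_concurrent_jobs = 30  # assumed peak of simultaneous NIM workloads
    static_licenses = total_gpus              # per-GPU assignment: one per GPU
    floating_licenses = peak_concurrent_jobs  # pool sized to peak concurrency
    print(f"static: {static_licenses} licenses, floating: {floating_licenses}")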

One-click NIM deployment on any infrastructure

Alongside floating license management, ClearML now supports one-click deployment of NVIDIA NIM microservices: pre-packaged, optimized inference containers for NVIDIA-supported models, including Nemotron open models and NVIDIA AI Blueprints. NIM containers are designed to run optimized model inference at scale, with performance tuned for specific GPU architectures and minimal operational overhead.

Combining one-click NIM deployment with automated license management means teams can go from “I need to run this model” to a running inference endpoint without touching license configuration, hunting for available capacity, or waiting for IT to provision access. ClearML’s orchestration layer handles it.
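
Once running, a NIM endpoint exposes an OpenAI-compatible API, so client code stays minimal. A sketch, assuming a Nemotron NIM is already deployed and reachable; the endpoint URL, model id, and key below are placeholders, not values ClearML dictates:

    from openai import OpenAI  # pip install openai

    client = OpenAI(
        base_url="http://nim.example.internal:8000/v1",  # placeholder endpoint
        api_key="placeholder",  # client requires a key; a local NIM may ignore it
    )

    response = client.chat.completions.create(
        model="nvidia/llama-3.1-nemotron-70b-instruct",  # placeholder model id
        messages=[{"role": "user", "content": "Summarize this quarter's results."}],
    )
    print(response.choices[0].message.content)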

Why this matters for enterprise AI operations

License management might sound like an administrative detail, but for enterprises running AI at scale it creates real operational friction. Floating licenses address several common pain points:

  • Distributed teams don’t need to coordinate who has licenses: the pool is shared and allocation is automatic. A data science team in one region doesn’t block an MLOps team in another from running their NIM workloads.
  • Multi-environment deployments are first-class: the same license pool spans on-premises, cloud, and hybrid infrastructure. Teams don’t need separate license allocations per environment.
  • License utilization becomes visible: with all allocations flowing through ClearML’s control plane, organizations can see exactly how licenses are being consumed across teams and workloads, supporting better planning.
  • AI initiatives can scale without license bottlenecks: as new teams and projects come online, they draw from the same pool rather than requiring dedicated license provisioning for each new workload.

“NVIDIA AI Enterprise makes it easy for teams to deploy optimized model inference at scale,” said Moses Guttmann, CEO and Co-Founder of ClearML. “With ClearML’s floating license management, teams can now fully unlock that potential, efficiently and on-demand. Organizations can orchestrate NVIDIA AI Enterprise workloads across their entire infrastructure and scale their deployment of these optimized containers as their AI initiatives grow.”

Availability

ClearML’s NVIDIA AI Enterprise floating license management is available now to enterprise customers. Organizations can learn more about NVIDIA AI Enterprise and request a ClearML demo at clear.ml/demo.
