
Run Safer: OpenShell runtime for enterprise AI agents
Enterprises are rolling out autonomous, self‑evolving AI agents that need strict guardrails around data, tools, and execution. NVIDIA OpenShell introduces a control layer for this class of systems, positioned to reduce risks such as data leakage, unsafe tool access, and prompt injection while keeping deployments auditable [1][4].
Why the OpenShell runtime for enterprise AI agents matters
Long‑running agents interact with files, networks, credentials, and external models, which can magnify exposure to exfiltration, privilege misuse, and cross‑session contamination. OpenShell sits between agents and the underlying infrastructure, applying out‑of‑process, policy‑based checks on what an agent can see and do, and where inference runs [1].
What is NVIDIA OpenShell? Key components at a glance
OpenShell is an Apache 2.0 licensed, open source runtime that enforces safety controls independently of the agent code path [1][4]. It provides three core pieces:
- Sandbox for isolated execution
- Policy Engine for runtime permission checks
- Privacy Router to direct workloads to local or external models based on privacy, cost, and compliance rules [1]
This architecture targets enterprise AI agent security without requiring invasive changes to agent implementations [1].
Sandbox: isolated agent sessions and deny‑by‑default controls
The Sandbox constrains filesystem, process, and network access using a deny‑by‑default model, per‑endpoint network policies, and scoped credentials [1]. Sessions are isolated so a compromised or misbehaving agent cannot directly affect others. These controls limit agents’ ability to install arbitrary packages or reach sensitive systems, reducing risks from prompt injection, unsafe tool use, or data leakage [1].
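The sources describe the deny‑by‑default model but not OpenShell's actual policy format. The sketch below illustrates the idea in Python under stated assumptions: the class and method names (`SandboxPolicy`, `permits_file`, `permits_endpoint`) are hypothetical, invented for illustration, and the allow‑list entries are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class SandboxPolicy:
    """Hypothetical deny-by-default policy: nothing is reachable unless listed."""
    allowed_paths: set = field(default_factory=set)
    allowed_endpoints: set = field(default_factory=set)

    def permits_file(self, path: str) -> bool:
        # Deny unless the path falls under an explicitly allowed prefix.
        return any(path.startswith(prefix) for prefix in self.allowed_paths)

    def permits_endpoint(self, host: str) -> bool:
        # Per-endpoint network policy: only exact allow-listed hosts pass.
        return host in self.allowed_endpoints

policy = SandboxPolicy(
    allowed_paths={"/workspace/"},
    allowed_endpoints={"api.internal.example.com"},
)

print(policy.permits_file("/workspace/notes.md"))   # inside the allowed prefix
print(policy.permits_file("/etc/passwd"))           # denied by default
print(policy.permits_endpoint("evil.example.net"))  # denied by default
```

The key property is that the empty policy denies everything; operators grant capability by adding entries, never by removing restrictions.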
Policy Engine: out‑of‑process checks and auditable enforcement
OpenShell’s Policy Engine performs out‑of‑process, runtime checks that agents cannot bypass or rewrite, which turns governance into actively enforced policy rather than guidance [1]. Because enforcement is external to the agent, it supports auditable records of allowed and denied actions, helping organizations demonstrate compliance during evaluation and production [1][3].
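The enforcement-plus-audit pattern can be sketched as follows. This is a minimal single‑process model for brevity; in OpenShell's described design the check runs out of process, so the agent has no way to reach `POLICY` or `AUDIT_LOG`. All names here are assumptions, not OpenShell APIs.

```python
import json
import time

AUDIT_LOG = []  # append-only record of every decision, for compliance review

# Hypothetical policy table: actions not listed are denied by default.
POLICY = {"read_file": True, "install_package": False}

def enforce(action: str, detail: str) -> bool:
    """Hypothetical enforcement point: decide, record, then return the verdict."""
    allowed = POLICY.get(action, False)  # deny-by-default for unknown actions
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "action": action,
        "detail": detail,
        "decision": "allow" if allowed else "deny",
    }))
    return allowed

enforce("read_file", "/workspace/data.csv")    # allowed and logged
enforce("install_package", "left-pad")         # denied and logged
```

Because every decision is appended to the log before the verdict is returned, denied actions leave the same audit trail as allowed ones, which is what makes enforcement demonstrable rather than assumed.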
Privacy Router: routing inference to local vs external models
The Privacy Router applies privacy, cost, and compliance policies to route requests between local and external models, balancing data sensitivity with cost control [1]. This is relevant where regulated or confidential data must stay on premises while other workloads can leverage external services [1].
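A routing decision of this kind can be sketched as a simple tag check, shown below under assumptions: the tag names, policy shape, and `route` function are invented for illustration and do not reflect the Privacy Router's real configuration.

```python
def route(request_tags: set, policy: dict) -> str:
    """Hypothetical router: regulated or confidential data stays on a local
    model; everything else may use a cheaper external endpoint."""
    if request_tags & policy["must_stay_local"]:
        return "local-model"
    return "external-endpoint"

# Placeholder policy: data tagged PII or regulated never leaves the premises.
policy = {"must_stay_local": {"pii", "regulated"}}

print(route({"pii"}, policy))     # local-model
print(route({"public"}, policy))  # external-endpoint
```
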
Integration with NVIDIA Agent Toolkit and models
OpenShell underpins NVIDIA’s broader Agent Toolkit, which includes models such as Nemotron, tools, evaluation harnesses, and blueprints for building autonomous agents that plan and execute multi‑step tasks [1][2][4]. Existing coding and productivity agents, such as OpenClaw, Claude Code, and Codex, can run unmodified inside OpenShell through a sandbox command, enabling safer trials and iterative deployments [1].
Adopting OpenShell without rewriting agents
Enterprises can adopt the OpenShell runtime for enterprise AI agents without rewriting agents, then layer policies that restrict package installation, API use, and network reachability. The approach aims to convert implicit trust into explicit, testable controls for high‑impact workloads [1].
Enterprise use cases and partner examples
Cisco integrates OpenShell with Cisco AI Defense to define policies, validate real‑time agent compliance, monitor behavior, and create auditable records of actions, moving safety from assumed to demonstrable outcomes [3]. Cohesity is using OpenShell to pursue AI resilience, allowing powerful autonomous agents to manage backup and recovery workflows while constraining API access, package installation, and credential use [5]. Broader enterprise adoption includes vendors such as SAP and ServiceNow evaluating the runtime for agent deployments [4].
Deployment options: on‑prem, DGX/RTX, and cloud partners
Organizations can deploy on‑premises, including on DGX systems, RTX workstations, and PCs, or run through cloud partners, providing flexibility for data residency and compliance needs [1][4]. This makes OpenShell practical for mixed‑estate environments where some workloads must stay local while others can burst to the cloud [1][4].
Security benefits and tradeoffs — what operators should know
- Start with deny‑by‑default policies on filesystem, process, and network scopes, then grant only what specific tools need [1].
- Use scoped credentials to prevent lateral movement across sessions and services [1].
- Keep policy enforcement out of process so agents cannot alter controls during execution [1].
- Route sensitive data to local models and offload non‑sensitive tasks to external endpoints per Privacy Router rules [1].
These measures help prevent data exfiltration and unsafe tool use, though operators still need monitoring, policy iteration, and incident review to maintain posture over time [1][3].
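The scoped‑credential point in the list above can be illustrated with a short sketch. This is a hypothetical illustration, not OpenShell's credential API: `issue_scoped_credential` and `authorize` are invented names, and real implementations would add expiry and signing.

```python
import secrets

def issue_scoped_credential(session_id: str, services: set) -> dict:
    """Hypothetical scoped credential: bound to one session and a fixed set
    of services, limiting lateral movement if the session is compromised."""
    return {
        "token": secrets.token_hex(16),
        "session": session_id,
        "services": frozenset(services),
    }

def authorize(cred: dict, session_id: str, service: str) -> bool:
    # Both the session binding and the service scope must match.
    return cred["session"] == session_id and service in cred["services"]

cred = issue_scoped_credential("sess-1", {"object-store"})
print(authorize(cred, "sess-1", "object-store"))  # True: in scope
print(authorize(cred, "sess-1", "billing-api"))   # False: service out of scope
print(authorize(cred, "sess-2", "object-store"))  # False: wrong session
```

Because the credential names both a session and a service list, a leaked token from one agent session cannot be replayed against other sessions or other services.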
How to get started: repo, licensing, and first pilots
OpenShell is available as an Apache 2.0 open source runtime on GitHub, making it straightforward to evaluate and audit [1][4]. A practical first step is to run existing coding or productivity agents inside the Sandbox via the provided command, then layer policies to constrain package installation, API calls, and network access before moving to production pilots [1].
Sources
[1] Run Autonomous, Self-Evolving Agents More Safely with NVIDIA OpenShell
https://developer.nvidia.com/blog/run-autonomous-self-evolving-agents-more-safely-with-nvidia-openshell/
[2] NVIDIA expands Agent Toolkit for open enterprise AI agents
https://www.stocktitan.net/news/NVDA/nvidia-ignites-the-next-industrial-revolution-in-knowledge-work-with-fm2cpwobav2m.html
[3] Securing Enterprise Agents with NVIDIA OpenShell and Cisco AI Defense
https://blogs.cisco.com/ai/securing-enterprise-agents-with-nvidia-and-cisco-ai-defense
[4] NVIDIA Expands Enterprise AI Push with OpenShell and Agent Software
https://adtmag.com/articles/2026/03/18/nvidia-expands-enterprise-ai-push-with-openshell-and-agent-software.aspx
[5] Cohesity taps NVIDIA OpenShell to build AI resilience
https://www.cohesity.com/blogs/cohesity-taps-nvidia-openshell-to-build-ai-resilience/