Securing Enterprise Agents with NVIDIA OpenShell and Cisco AI Defense
Publish Time: 16 Mar, 2026

Enterprise Autonomous Agents: Powered by NVIDIA's Open Source AI Runtime and Secured by Cisco AI Defense

OpenClaw showed the world how autonomous, self-evolving agents are a step-change in how software works. Yet, in the enterprise, this type of power without governance isn't innovation; it's unmanaged risk. These agents are already live, running now -reading configurations, querying knowledge graphs, triggering compliance workflows, and reaching external tools.

The question is simple: do your controls match their access?

TheOpenShell open source agent runtime provides guardrails at the infrastructure level through isolated sandboxes for each agent, a fine-grained policy engine and a privacy router. Cisco AI Defense defines the boundaries, making sure and keeping a continuous record that agent behavior matches what policy permits as the agent reaches for additional skills and tools to meet its objectives.

Think of it this way. OpenShell constrains what agents can do. Cisco AI Defense enforces what they do and verifies what they did. Together, they make the answer to "can we trust this agent in a critical workflow?" provable, not probable.

Autonomous enterprise agents powered by NVIDIA OpenShell enforces the boundary.
Cisco AI Defense verifies everything within it.

What does this look like in action? Consider this fictional scenario:

It's Friday, 6:45 PM.

A critical Zero-day advisory bulletin drops.

In most organizations, this moment triggers a familiar chain reaction: someone pulls an asset list, someone else starts pinging the weekend rotation, and everyone quietly hopes the blast radius is small. The race is on, but it's a race typically run in the dark and in panic.

This post is about a different kind of Friday night.


Act I: Start from Truth, Not Panic

We've been preparing for this day. Before the security bulletin lands, Cisco's enterprise agents are already running quietly in the background.

In Cisco AI Canvas, a context agent has been continuously reading device configurations, ingesting show-command outputs, and mapping telemetry into a live knowledge graph. Every router, switch, and firewall in the environment is a node. Every dependency, version string, and role is a relationship.

So, when the new security advisory drops, we don't start from zero. We start from the known baseline with a live knowledge graph.

The agent already knows which devices are running which software versions. It understands which nodes sit at the edge, which are internal, and interdependencies. That context built incrementally and continuously over time is what makes the next step possible.

This is the core premise of autonomous long running agents, moving beyond a chatbot that simply answers questions, but a long-running agentic-powered system that accumulates understanding and then applies it when it matters most.


Act II: Reason Fast, Enforce Faster

The new advisory auto-triggers a security operations agent in Cisco AI Canvas that takes the bulletin and gets to work. It reads the security advisory, interprets the vulnerability logic, and begins mapping it against real device state pulled from the knowledge graph.

This isn't keyword matching. The agent:

  • Parses the bulletin to understand the conditions under which a device is vulnerable
  • Queries the knowledge graph to find matching devices
  • Evaluates blast radius, which devices are affected, and what do they connect to?
  • Plans remediation and recommends mitigations, by risk, reachability, and change impact

But the capability is only half the story; this entire reasoning workflow runs inside NVIDIA OpenShell, an open source sandbox environment designed specifically for autonomous, long-running agents.

OpenShell wraps the agent in runtime-enforced constraints:

  • Sandbox containment: The agent operates in a contained environment. It cannot reach outside its permitted boundary, limited on a need-to-know basis.
  • Deny-by-default access: The agent starts with zero permissions. It only gets access to what policy explicitly allows; nothing more.
  • Per-endpoint network policy: Tool calls are filtered against an approved list. Unverified packages are blocked.
  • Privacy routing: Sensitive data stays local. Prompts to cloud inference are anonymized to protect PII or proprietary data.

This is a crucial distinction. We are not trusting the model to do the right thing. We are constraining it so that the right thing is the only thing it can do. The agent doesn't need to be perfect. The sandbox, tools/skills verification ensures its imperfections stay contained, and critical enterprise configurations are handled with utmost care given the sensitivity of the advisory bulletin and new exposure risk.


Act III: Trust Verified, Not Assumed

Trust in this workflow doesn't begin when an attack is detected. It begins before the agent runs its first task.

Every tool, MCP server, and skill the agent is permitted to reach has been scanned and verified by Cisco AI Defense Supply Chain risk management capabilities before it ever receives a call. This isn't a one-time allow-list review; it's a continuous supply chain posture for AI tooling.

Consider the Report Generator: a third-party formatting skill that produces the final remediation output, a structured PDF with an executive summary, per-device findings, and patch sequencing. On the surface, it's the least threatening component in the workflow. But a compromised or poisoned version of this skill could silently omit critical findings from the report or embed exfiltration payloads in document metadata and no one would know until a device went unpatched.

This is the AI skills supply chain problem. The attack surface isn't just the reasoning model or the live tool calls. It's every dependency the agent touches including the ones that format the output. Only AI Defense verified skills are made available to the agent. If a skill hasn't been vetted, it doesn't appear in the catalog.

Now the agent moves from analysis to action, filing remediation tickets through what appears to be a legitimate internal ticketing integration, an approved MCP server in the pre-verified catalog. This is the most sensitive moment in the workflow: the agent is passing real device identifiers, vulnerability details, and network topology context into an external system outside the sandbox boundary.

AI Defense MCP tool call inspection is already watching, and it already knows what a valid call to this server looks like. It detects unexpected behavior in the outbound request, a covert exfiltration attempt, engineered to capture the sensitive device data the agent is transmitting at exactly the moment it has the most to send.

The inspection reveals a malicious signature embedded in the MCP payload, a prompt injection designed to exfiltrate device configuration data and redirect the agent's remediation recommendations, as this is an unexpected behavioral anomaly.

Here's what happens:

  • The MCP call is blocked at the AI Defense Gateway before any payload is processed
  • The workflow is contained, sensitive data never leaves the environment
  • An alert is created in AI Defense of the tool call for review
  • The agent continues operating on pre-verified trusted sources without interruption

The pre-verified trusted tool catalog does more than stop attacks. It closes the gap between what an agent should be able to do and what it can do at runtime.

This is the difference between deploying an agent and trusting an agent. OpenShell constrains what it can do at the infrastructure level. Cisco AI Defense verifies that everything it's allowed to reach was trustworthy before it got there and confirms it behaved as expected.

By 8:00 PM - a little over an hour after the bulletin dropped, the security team has:

  • A validated list of impacted devices, mapped against real configuration state
  • A dependency-aware remediation plan that accounts for network topology and prioritized by exposure risk
  • An audit-grade trace of every reasoning step, tool call, and decision point

The New Standard for the Autonomous Enterprise

Ultimately, the goal is to move beyond the 'black box' of AI. OpenShell provides the sandbox, and Cisco AI Defense provides the verification layer that makes autonomous agents safe for the enterprise. When you can prove exactly what an agent is doing-and why-you stop managing risk and start scaling innovation. That is the new standard for the autonomous enterprise.

I’d like Alerts: