It's been almost a year since we launched AgenticOps, an agent-first operating model for modern IT. In that time, it has helped teams operate more effectively in environments that are increasingly complex, interconnected, and always on.
And that complexity isn't slowing down.
At the same time, human attention and coordination are finite. As environments grow larger and more dynamic, the challenge isn't effort or expertise; it's scale. More dashboards add data, not clarity. Automation helps with repeatable tasks, but it breaks down when conditions change.
What teams need is always-on support that understands context across systems and can take a first pass at action: AI-powered digital teammates that absorb noise, connect signals, and operate within guardrails to help teams scale.
That's why we established AgenticOps last year.
But adding AI into operations raises an immediate question: how do you scale support without introducing new risk?
Why AI Fails Without Judgment
In real environments, most failures don't come from a lack of insight. They come from actions that were logically correct but operationally wrong, taken without enough context or at the wrong moment.
This is the gap that judgment fills.
Judgment answers the questions dashboards can't: "Is this the right time to act?" "What's the blast radius if we're wrong?" "What else could this affect?"
When judgment is missing, trust breaks down. And as agentic systems take on more responsibility, trust becomes the prerequisite for scale. For that trust to hold, judgment can't be sporadic or reactive. It must show up early, consistently, and before impact.
This is why AI-in-the-loop is essential: it augments and scales human capabilities, ensuring judgment is applied in time, every time, as environments change.
What It Takes to Make AI-in-the-Loop Real
When we introduced AgenticOps, we were explicit about what it would take to make AI-in-the-loop real: not another layer of automation, but an operating model designed for trust. At a foundational level, any trustworthy agentic system in operations needs three things: shared context, domain-aware reasoning, and governed execution.
That's why Cisco's approach starts with live, cross-domain telemetry spanning networking, security, observability, collaboration, and compute. This shared context ensures agentic systems reason from how users and applications are actually experiencing the environment and not from isolated or incomplete signals.
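To make the idea of "shared context" concrete, here is a purely illustrative sketch (the post does not describe internals, so the event fields and function names below are assumptions): per-domain telemetry signals are grouped around the application they affect, so reasoning starts from how users experience the environment rather than from isolated feeds.

```python
from collections import defaultdict

# Hypothetical telemetry events; the field names are assumptions for illustration.
events = [
    {"domain": "network",       "app": "payroll", "signal": "latency spike"},
    {"domain": "security",      "app": "payroll", "signal": "new egress path"},
    {"domain": "observability", "app": "payroll", "signal": "error rate up"},
    {"domain": "network",       "app": "crm",     "signal": "nominal"},
]

def shared_context(events):
    """Group per-domain signals by the application they affect, giving one
    cross-domain view instead of several isolated, incomplete ones."""
    ctx = defaultdict(list)
    for e in events:
        ctx[e["app"]].append((e["domain"], e["signal"]))
    return dict(ctx)
```

In this toy view, three independent feeds collapse into a single picture of the `payroll` application, which is the kind of correlated starting point the paragraph above describes.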
From there, we focused on intelligence built for operational reality. Instead of relying on a single general-purpose model to reason across everything, we combine domain-specific reasoning, like the Deep Network Model built from real network behavior and operational expertise, with frontier reasoning models. This allows the system to move quickly when speed matters, and to slow down when precision and risk demand deeper analysis.
In operations, trust is earned through reliable execution. That's why Cisco's AgenticOps includes deterministic, governed action through Agentic Workflows: when systems act, they do so within clear policies, with explainable reasoning and auditable outcomes. Autonomy can be expanded deliberately, as confidence grows.
This wasn't about shipping features. It was about committing to an operating model and continuing to build what it takes to support it at scale. This week at Cisco Live Amsterdam, we launched expanded agentic capabilities to support autonomous troubleshooting, continuous optimization, and trusted validation.
This is AI-in-the-loop with governance, not guesswork.
A Day with AI-in-the-Loop
You walk in on a normal morning. There isn't a wall of alerts waiting for you. Your inbox isn't filling up with "any updates?" messages. Chat threads aren't lighting up with screenshots and half-formed theories.
Not because nothing is happening, but because the system is already paying attention.
Issues are investigated before they escalate. Changes are evaluated against live conditions before anyone touches production. Optimizations run quietly in the background, instead of waiting for a maintenance window that never quite arrives.
When something does need your attention, it's clear why-what's happening, what the impact is, and which options are safe. You're still doing the work, but you're starting from context, not noise.
This is the vision for where Cisco AgenticOps is heading, and I'm proud of what we've already enabled.
Since launching last year, we've introduced AI Canvas and the Deep Network Model. We expanded the AI Assistant, one of the fastest-adopted capabilities across Cisco platforms, into Meraki, ThousandEyes, and Catalyst Center, as well as Cisco security, observability, and Webex offerings. And we launched Agentic Workflows, now embedded across enterprise operational processes, with more than 165,000 automated executions running in production in the last 30 days alone.
These aren't experiments. They're systems supporting hospitals, factories, campuses, retailers, and global enterprises where failures wake people up at 3 a.m., halt production lines, and affect safety, revenue, and trust.
Why This Ultimately Matters
With AgenticOps and AI-in-the-loop, the first thing that changes isn't the technology; it's how the work feels.
When systems carry more of the continuous load (watching, correlating, validating), teams stop operating in a constant state of reaction. They gain time to think, to ask better questions, and to make decisions deliberately.
Teams stay in control, with speed, context, and confidence, knowing when to act, when not to, and how to move forward intentionally. This is the shift modern IT needs.
Cisco's AgenticOps is how we'll get them there.
