Securing Agents & AI Supply Chain with Cisco AI Defense
Publish Time: 02 Dec, 2025

The conversation around AI and its enterprise applications has rapidly shifted focus to AI agents: autonomous AI systems that can not only converse, but also reason, plan, and execute actions. 

Our Cisco AI Readiness Index 2025 underscores this excitement, as 83% of companies surveyed already intend to develop or deploy AI agents across a variety of use cases. At the same time, these businesses are clear about their practical challenges: infrastructure limitations, workforce planning gaps, and of course, security. 

At a time when many security teams are still contending with AI security at a high level, agents expand the AI risk surface even further. After all, a chatbot can say something harmful, but an AI agent can do something harmful. 

We introduced Cisco AI Defense at the beginning of this year as our answer to AI risk: a comprehensive security solution for the development and deployment of enterprise AI applications. As this risk surface grows, we want to highlight how AI Defense has evolved to meet these challenges head-on with AI supply chain scanning and purpose-built runtime protections for AI agents. 

Below, we'll walk through real examples of AI supply chain and agent vulnerabilities, unpack their potential implications for enterprise applications, and show how AI Defense enables businesses to mitigate these risks directly. 

Identifying vulnerabilities in your AI supply chain 

Modern AI development relies on a myriad of third-party and open-source components such as models and datasets. With the advent of AI agents, that list has grown to include assets like MCP servers, tools, and more. 

While they make AI development more accessible and efficient than ever, third-party AI assets introduce risk. A compromised component in the supply chain effectively undermines the entire system, creating opportunities for code execution, sensitive data exfiltration, and other insecure outcomes. 

This isn't just theoretical, either. A few months ago, researchers at Koi Security identified the first known malicious MCP server in the wild. This package, which had already garnered thousands of downloads, included malicious code to discreetly BCC an unsanctioned third party on every single email. Similar malicious inclusions have been found in open-source models, tool files, and various other AI assets. 

Cisco AI Defense will directly address AI supply chain risk by scanning model files and MCP servers in enterprise repositories to identify and flag potential vulnerabilities. 

By surfacing potential issues like model manipulation, arbitrary code execution, data exfiltration, and tool compromise, our solution helps prevent AI developers from building with insecure components. By integrating supply chain scanning tightly within the development lifecycle, businesses can build and deploy AI applications on a reliable and secure foundation. 
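To make model-file scanning concrete, here is a minimal sketch of one well-known technique in this space: inspecting the opcode stream of a pickled model artifact for references to dangerous callables, without ever deserializing it. This is an illustration of the general approach, not Cisco AI Defense's actual implementation; the function name and the denylist are assumptions for the example, and a real scanner covers far more formats and evasion patterns.

```python
import pickletools

# Globals commonly abused for arbitrary code execution in pickled model files.
# (Illustrative denylist only; a production scanner uses much broader coverage.)
UNSAFE_GLOBALS = {
    ("os", "system"), ("posix", "system"),
    ("builtins", "eval"), ("builtins", "exec"),
    ("subprocess", "Popen"),
}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Statically flag GLOBAL references to known-dangerous callables.

    pickletools.genops parses the opcode stream without executing it,
    so this is safe to run on untrusted files. (Limitation: STACK_GLOBAL
    resolution in newer protocols needs stack tracking, omitted here.)
    """
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            module, _, name = arg.partition(" ")
            if (module, name) in UNSAFE_GLOBALS:
                findings.append(f"{module}.{name} at byte {pos}")
    return findings
```

A payload crafted to run `os.system` on load would be flagged before any `pickle.load` call ever touches it, which is exactly the point: the check happens at scan time, not at deserialization time.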

Safeguarding AI agents with purpose-built protections 

A production AI application is susceptible to any number of explicitly malicious attacks or unintentionally harmful outcomes: prompt injections, data leakage, toxicity, denial of service, and more. 

When we launched Cisco AI Defense, our runtime protection guardrails were specifically designed to protect against these scenarios. Bi-directional inspection filtered harmful content out of both user prompts and model responses, keeping interactions with enterprise AI applications safe and secure. 

With agentic AI and the introduction of multi-agent systems, there are new vectors to consider: greater access to sensitive data, autonomous decision-making, and complex interactions between human users, agents, and tools. 

To meet this growing risk, Cisco AI Defense has evolved with purpose-built runtime protection for agents. AI Defense will function as an MCP gateway, intercepting calls between an agent and an MCP server to combat new threats like tool compromise. 

Let's drill into an example to better understand it. Imagine a tool that agents use to search and summarize content on the web. One of the websites searched contains hidden instructions to hijack the AI, a familiar scenario known as an "indirect prompt injection." 

With simple AI chatbots, indirect prompt injections might spread misinformation, elicit a harmful response, or distribute a phishing link. With agents, the potential grows: the prompt might instruct the AI to steal sensitive data, distribute malicious emails, or hijack a connected tool.  
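One simple way to reason about this threat is to screen content returned by tools before it ever reaches the model. The sketch below uses a handful of regex heuristics to flag injection-like phrasing in fetched web content. This is purely illustrative: the patterns and function name are assumptions for the example, and production guardrails rely on trained classifiers rather than a fixed regex list.

```python
import re

# Heuristic phrases that often signal an indirect prompt injection hidden in
# fetched content. (Illustrative only; real guardrails use ML classifiers.)
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"forward .* to .*@", re.I),
    re.compile(r"do not (tell|inform) the user", re.I),
]

def screen_tool_output(text: str) -> tuple[bool, list[str]]:
    """Return (is_suspicious, matched_patterns) for content a tool returned.

    Called between the tool and the model, so a flagged page can be dropped
    or sanitized before the agent ever 'reads' the injected instructions.
    """
    hits = [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
    return (bool(hits), hits)
```

The key design point is placement: the check sits on the tool-to-model path, the exact seam where an indirect injection crosses from untrusted content into the agent's context.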

Cisco AI Defense will protect these agentic interactions on two fronts. Our previously existing AI guardrails will monitor interactions between the application and model, just as they have since day one. Our new, purpose-built agentic guardrails will examine interactions between the model and MCP server to ensure that those too are safe and secure. 
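The gateway pattern described above can be sketched as a thin policy layer that sits between the agent and the MCP server, authorizing each tool call before forwarding it. The class and field names below are hypothetical, and this is a minimal illustration of the interception pattern, not AI Defense's actual gateway.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolCall:
    tool: str
    arguments: dict[str, Any]

@dataclass
class McpGatewayPolicy:
    """Minimal policy layer between an agent and an MCP server.

    (Hypothetical sketch. A real gateway would also pin and re-verify tool
    descriptions to catch post-approval changes, inspect responses, and log
    every decision for audit.)
    """
    allowed_tools: set[str]
    argument_checks: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def authorize(self, call: ToolCall) -> tuple[bool, str]:
        # Deny any tool the deployment hasn't explicitly approved.
        if call.tool not in self.allowed_tools:
            return False, f"tool '{call.tool}' is not on the allowlist"
        # Apply per-tool argument policy before the call leaves the gateway.
        check = self.argument_checks.get(call.tool)
        if check and not check(call.arguments):
            return False, f"arguments for '{call.tool}' failed policy check"
        return True, "allowed"
```

Because every call transits the gateway, a compromised or swapped-out tool is constrained by policy even if the agent itself has been manipulated into invoking it.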

Our goal with these new capabilities is unchanged: we want to enable businesses to deploy and innovate with AI confidently and without fear. Cisco remains at the forefront of AI security research, collaborating with AI standards bodies and leading enterprises, and partnering with Hugging Face to scan every public file uploaded to the world's largest AI repository. By combining this expertise with Cisco's decades of networking leadership, AI Defense delivers an AI security solution that is both comprehensive and enforced at the network level.   

For those interested in MCP security, check out an open-source version of our MCP Scanner that you can get started with today. Enterprises looking for a more comprehensive solution to address their AI and agentic security concerns should schedule time with an expert from our team. 

Many of the products and features described herein remain in varying stages of development and will be offered on a when-and-if-available basis.
