Cisco explores the expanding threat landscape of AI security for 2026 with its latest annual report
Publish Time: 19 Feb, 2026

Thank you to all of the contributors of the State of AI Security 2026, including Amy Chang, Tiffany Saade, Emile Antone, and the broader Cisco AI research team.

As artificial intelligence (AI) technology and enterprise AI adoption advance at a rapid pace, the security landscape around it is expanding faster, leaving many defenders struggling to keep up. Last year, we introduced our inaugural State of AI Security report to help security professionals, business leaders, policymakers, and the broader community make sense of this novel and complex field-and prepare for what comes next. 

In fact, a lot can change in a year. 

Today, we are proud to share the State of AI Security 2026, our flagship report that builds upon the foundational analysis covered in last year's edition. 

This publication sheds light on the AI threat landscape, a snapshot in time, but one that marks the beginnings of a major paradigm shift in AI security. The confluence of rapid AI adoption, untested boundaries and limits of AI, non-existent norms of behavior around AI security and safety, and existing cybersecurity risk requires a fundamental change to how companies approach digital security. As the report details, AI vulnerabilities and exploits conceptualized within the confines of a research lab have materialized, evidenced by numerous reports of AI compromise and AI-enabled malicious campaigns from the second half of 2025. Other notable developments-the proliferation of agentic AI, changes in government regulation, and growing attacker interest in AI, for example-have further complicated the situation. 

Like its predecessor, the State of AI Security 2026 explores new and notable advancements across AI threat intelligence, global AI policy, and AI security research. In this blog, we provide a preview of some of the areas covered in our latest report. 

Threats to AI applications and agentic systems 

At the outset of 2025, the industry was characterized by a profound dissonance between AI adoption and AI readiness. While 83 percent of organizations we surveyed had planned to deploy agentic AI capabilities into their business functions, only 29 percent of organizations felt they were truly ready to leverage these technologies securely. Organizations that rushed to integrate LLMs into critical workflows may have bypassed traditional security vetting processes in favor of speed, sowing a fertile ground for security lapses and opening the door for adversarial campaigns. 

Today, AI capabilities exceed the conceptual boundaries of previously available systems. Generative AI is accelerating rapidly, often without proper testing and evaluation, supply chains are growing in complexity, often without proper controls and governance, and powerful, autonomous AI agents are proliferating across critical workflows, often without accountability being ensured. The potential for immense value in these systems comes with an equally massive risk surface for organizations to contend with. 

The State of AI Security 2026 dives into the evolution of prompt injection attacks and jailbreaks of AI systems. It also examines the fragility of the modern AI supply chain, highlighting vulnerabilities that can be found in datasets, open-source models, tools, and various other AI components. We also look at the growing risk surface of Model Context Protocol (MCP) agentic AI and note how adversaries can use agents to execute attack campaigns with tireless efficiency. 

An innovation-first approach for global AI policy 

Against the backdrop of an evolving threat landscape, and as agentic and generative AI technologies introduce new security complexities, the State of AI Security 2026 report also examines three major AI players' approaches to AI policy: the United States, European Union, and the People's Republic of China. The trajectory of AI governance in 2025 represented a definitive shift, with preceding years defined by a stronger emphasis on AI safety-non-binding agreements and regulation that were intended to protect constitutional or fundamental rights. In 2025, we witnessed a global repositioning towards innovation and investment in AI development while still contending with the inherent security and safety concerns that generative AI may pose through misaligned model behavior or malicious activity such as developing deepfakes for social engineering. 

The United States, under a new administration, is focused on fostering an environment that encourages innovation over regulation, pivoting away from more stringent safety frameworks and relying on existing laws. In the European Union (EU), following the ratification of the EU AI Act, there was broad political consensus for the need to simplify rules and stimulate AI investing, including through public funding. China has pursued a dual-track strategy of deeply integrating AI via state planning while simultaneously erecting a sophisticated digital apparatus to manage the social risks of anthropomorphic and emotional AI. As our report explores, each of these three regulatory blocs has adopted a distinct national-level approach to AI development reflecting political systems, economic priorities, and normative values. 

AI security research and tooling at Cisco 

Over the last year, the Cisco AI Threat Intelligence & Security Research team has both pioneered and contributed to threat research and open-source models and tools. These initiatives map directly to some of the most critical contemporary AI security challenges, including AI supply chain vulnerability, agentic AI risk, and the weaponization of AI by attackers. 

The State of AI Security 2026 report gives a succinct overview of some of the latest releases by our team. These include research into open-weight model vulnerabilities, which sheds light on how various models remain susceptible to jailbreaks and prompt injections, especially over lengthier conversations. It also covers four open-source projects: a structure-aware pickle fuzzer that generates adversarial pickle files and scanners for MCP, A2A, and agentic skill files to help secure the AI supply chain. 

Get the report 

Ready to read the full State of AI Security report for 2026? Check it out here.

I’d like Alerts: