Key takeaways
- In 2026, weaponized AI will cause unprecedented harm.
- Malicious AI agents will evade detection as they roam networks.
- CISOs must upskill their teams to deal with AI-related threats.
Looking back at the biggest cybersecurity breaches and intrusions of 2025, here's what I wonder: Will those trends continue unabated into the new year? Or, will 2026 be full of new surprises as threat actors attempt to stay one step ahead of the cybersecurity pros trying to anticipate their next move?
According to the threat intelligence and cybersecurity experts I've talked to, it's likely to be a bit of each. And it should come as no surprise that artificial intelligence topped the threat list for many researchers.
Also: How these state AI safety laws change the face of regulation in the US
[For this report, I checked in with seven organizations, all trusted sources for my cybersecurity reporting during 2025.]
Threat actors started using AI in 2025. It'll get much worse in 2026
The weaponization of AI in 2025 appears poised to turn an evolutionary corner in 2026, making previous generations of malware appear benign by comparison.
Also: Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it
"In 2026 and beyond, threat actor use of AI is expected to transition decisively from the exception to the norm, noticeably transforming the cyber threat landscape," noted security leaders at Google's Mandiant and Threat Intelligence Group (GTIG). "We anticipate that actors will fully leverage AI to enhance the speed, scope, and effectiveness of operations, building upon the robust evidence and novel use cases observed in 2025. This includes social engineering, information operations, and malware development."
"Additionally," Google continued, "we anticipate threat actors will increasingly adopt agentic systems to streamline and scale attacks by automating steps across the attack lifecycle. We may also begin to see other AI threats increasingly being discussed in security research, such as prompt injection and direct targeting of the models themselves."
Floris Dankaart, lead product manager in NCC's Managed Extended Detection and Response Group, said: "2025 marked the first large-scale AI-orchestrated cyber espionage campaign, where Anthropic's Claude was used to infiltrate global targets. It was already apparent that tools for such a campaign were being developed (for example, "Villager"). This trend will continue in 2026, and AI's use as a sword will be followed by an increase in AI's use as a shield."
Across various sites, Villager is discussed as the likely AI-native heir to the Cobalt Strike throne. Cobalt Strike is a commercial adversary-simulation and penetration-testing tool widely used by cybersecurity pros to emulate threat actor behavior and gauge how an organization detects and responds. Unfortunately, Cobalt Strike has also been weaponized by malicious actors.
In contrast to Cobalt Strike, however, Villager has AI in its DNA and is therefore viewed by the cybersecurity community as a more capable alternative. But in much the same way Cobalt Strike has been co-opted for illicit activities, Villager could end up doing as much harm as good, if not more. That concern is amplified by Villager's Chinese origins: China is well-known for its sprawling cyber-espionage initiatives, and there's a distinct possibility that Villager was developed with malicious use in mind.
Also: Anthropic to Claude: Make good choices!
"While Anthropic's recent report on a Chinese nation-state threat actor's use of AI in a campaign lacked details, it demonstrated the continued evolutionary role of AI in attack chains and was the simplest attack we'll see moving into the future," noted LastPass senior principal analyst Mike Kosak. According to Kosak, the cybersecurity community is already off to a bad start in its attempts to stay one step ahead of malicious actors. "Right now, threat actors are learning the technology and setting the bar," he said.
In all, my conversations with threat intelligence and cybersecurity experts identified 10 areas of vulnerability that deserve every business leader's attention in 2026.
1. AI-enabled malware will unleash havoc
2025 was a pivotal year for AI-enabled malware, a category of malware that is noteworthy for either preying on victims' use of AI or using AI itself to conduct its malicious activities. In November 2025, GTIG published a summary of its AI-involved malware observations, noting that "adversaries are no longer leveraging AI just for productivity gains; they are deploying novel AI-enabled malware in active operations." That shift, according to GTIG, "marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution."
The report goes on to identify several such malware families by name, including FruitShell, PromptFlux, PromptLock, and PromptSteal, the last of which has been observed in the wild using a large language model (LLM) to generate one-line PowerShell commands capable of finding and exfiltrating sensitive data from Windows-based computers.
Also: Your phishing detection skills are no match for the biggest security threats
"In 2026, threat actors will increasingly deploy AI-enabled malware in active operations," noted LastPass cyber threat intelligence analyst Stephanie Schneider. "This AI can generate scripts, alter codes to avoid detection, and create malicious functions on demand. Nation-state actors have used AI-powered malware to adapt, alter, and pivot campaigns in real-time, and these campaigns are expected to improve as the technology continues to develop. AI-powered malware will likely become more autonomous in 2026, ultimately increasing the threat landscape for defenders."
It's that ability of AI-enabled malware to dynamically adapt, morph, and change attack strategies that is extremely worrisome. At the very least, human defenders are dealing with their own species when battling against the wits and speed of other humans. But those defenders will increasingly find themselves at a significant speed and scale disadvantage once a threat actor's payload can autonomously adapt to countermeasures and to human presence at machine speeds.
"Malicious code is predicted to become increasingly 'self-aware,' utilizing advanced calculations to verify the presence of a human user before executing," Picus Security co-founder and VP Suleyman ?zarslan told . "Instead of blindly detonating, malware will likely analyze interaction patterns to distinguish between actual humans and automated analysis environments. This evolution suggests that automated sandboxes will face significant challenges, as threats will simply remain dormant or 'play dead' upon detecting the sterile inputs typical of security tools, executing only when convinced they are unobserved."
2. Agentic AI is evolving into every threat actor's fantasy
While AI-enabled malware is of grave concern, the growing reliance of threat actors on agentic AI also warrants significant attention. According to the aforementioned report from Anthropic, the Claude LLM developer discovered how attackers were using agentic AI to execute their cyberattacks.
"The threat actor -- whom we assess with high confidence was a Chinese state-sponsored group -- manipulated our Claude Code tool into attempting infiltration into roughly 30 global targets and succeeded in a small number of cases," wrote the authors of Anthropic's report. "The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention."
Also: AI's scary new trick: Conducting cyberattacks instead of just helping out
Alex Cox, director of Threat Intelligence, Mitigation, and Escalation at LastPass, echoed that warning: "Defenders will likely see threat actors use agentic AI in an automated fashion as part of intrusion activities, continue AI-driven phishing campaigns, and continue development of advanced AI-enabled malware. They'll use agentic AI to implement hacking agents that support their campaigns through autonomous work. In 2026, attackers will shift from passive use of AI in preparation activities to automation of campaigns and the evolution of their tactics, techniques, and procedures (TTPs)."
From the threat actor's point of view, agentic AI seems nearly purpose-built for one key malicious TTP: lateral movement. According to a CrowdStrike post, "Lateral movement refers to the techniques that a cyberattacker uses, after gaining initial access, to move deeper into a network in search of sensitive data and other high-value assets. After entering the network, the attacker maintains ongoing access by moving through the compromised environment and obtaining increased privileges using various tools. It allows a threat actor to avoid detection and retain access, even if discovered on the machine that was first infected. And with a protracted dwell time, data theft might not occur until weeks or even months after the original breach."
It's not hard to imagine how agentic AI could be a threat actor's fantasy when coupled with such lateral movement.
"AI is both the biggest accelerator and the biggest wildcard. Threat actors will increasingly use AI agents to automate reconnaissance, phishing, lateral movement, and malware development, making attacks faster, adaptive, and harder to detect," wrote NCC director Nigel Gibbons.
Perhaps one of the biggest fears related to agentic AI will be the extent to which end users may inadvertently expose sensitive information and assets during the deployment of their own agents without IT oversight.
Also: 96% of IT pros say AI agents are a security risk, but they're deploying them anyway
"By 2026, we expect the proliferation of sophisticated AI Agents will escalate the shadow AI problem into a critical 'shadow agent' challenge. In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval," wrote Google's cybersecurity experts. "This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft."
Unfortunately, banning agentic AI will not be an option. Between the promise of greatly improved efficiency and executive pressure to derive competitive advantage from AI, end users will likely take matters into their own hands if their IT departments don't sufficiently enable them.
According to AppOmni director of AI Melissa Ruzzi, there will be "increased pressure from users expecting AI agents to become more powerful, and organizations under pressure to develop and release agents to production as fast as possible. And it will be especially true for AI agents running in SaaS environments, where sensitive data is likely already present and misconfigurations may already pose a risk."
3. Prompt injection: AI tools will be the new attack surface
More to Google's point about how agentic AI deployments may lead to new "data leaks, compliance violations, and IP theft," any time new, supplemental platforms are layered onto an organization's existing IT stack, that organization has to contend with an expanded attack surface.
"By trying to make AI as powerful as it can be, organizations may misconfigure settings, leading to overpermissions and data exposure. They may also grant too much power to one AI, creating a major single point of failure," wrote AppOmni's Ruzzi. "In 2026, we'll see other AI security risks heighten even more, stemming from excessive permissions granted to AI and a lack of instructions provided to it about how to choose and use tools, potentially leading to data breaches."
Also: Are AI browsers worth the security risk? Why experts are worried
Meanwhile, much AI-enabled malware might not be possible were it not for the additional attack surface created by organizational or shadow-IT adoption of large language models.
"While AI promises unprecedented growth, it also introduces new, sophisticated risks. One of the most critical is prompt injection, a cyberattack that essentially manipulates AI, making it bypass its security protocols and follow an attacker's hidden command," wrote Google's cybersecurity leaders. "This isn't just a future threat; it's a present danger, and we anticipate a significant rise in these attacks throughout 2026. The increasing accessibility of powerful AI models and the growing number of businesses integrating them into daily operations create perfect conditions for prompt injection attacks. Threat actors are rapidly refining their techniques, and the low-cost, high-reward nature of these attacks makes them an attractive option. We anticipate a rise in targeted attacks on enterprise AI systems in 2026, as attackers move from proof-of-concept exploits to large-scale data exfiltration and sabotage campaigns."
In many cases, the sanctioned or unsanctioned introduction of AI as a supplemental platform creates a far more passive form of attack surface -- the one that appears when untrained users feed proprietary corporate information into a publicly shared LLM. Such was the case when Samsung engineers prompted ChatGPT with sensitive source code, thereby exposing that code to the wider community of ChatGPT users. According to Dark Reading, one of the engineers "pasted buggy source code from a semiconductor database into ChatGPT, with a prompt to the chatbot to fix the errors." The Dark Reading post goes on to describe how "information ends up as training data for the [LLM in a way that] someone could later retrieve the data using the right prompts." In other words, through such careless prompting, the organization's vulnerable surface area expands to include public services beyond its control.
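One hedge against that kind of leakage is to scrub obvious secrets before any prompt leaves the organization. The sketch below is a hypothetical, regex-based redaction pass, not a substitute for a real data-loss-prevention product; the patterns and placeholder labels are mine and would need tuning for any specific environment.

```python
# Hypothetical sketch of a pre-prompt redaction step: scrub obvious secrets
# (keys, tokens, email addresses) from text before it is sent to any external,
# publicly hosted LLM. Regexes are illustrative; real deployments would pair
# this with an allowlist of approved AI endpoints and full audit logging.
import re

REDACTION_RULES = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----"
    ),
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9\-_.]{20,}"),
}


def redact_for_external_llm(text: str) -> tuple[str, dict[str, int]]:
    """Replace likely secrets with placeholders and count what was removed."""
    counts: dict[str, int] = {}
    for label, pattern in REDACTION_RULES.items():
        text, n = pattern.subn(f"[REDACTED:{label}]", text)
        if n:
            counts[label] = n
    return text, counts


if __name__ == "__main__":
    sample = "Please fix this config: key=AKIA1234567890ABCDEF contact ops@example.com"
    cleaned, removed = redact_for_external_llm(sample)
    print(cleaned, removed)
```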
2026 is also the year during which the fusion of AI and web browsers could present new defense challenges. Between new entries into the market, such as OpenAI's ChatGPT Atlas, and the transformation of existing browsers like Chrome, Edge, and Firefox into AI front-ends, SquareX founder Vivek Ramachandran sees their adoption as a fait accompli.
Also: Gartner urges businesses to 'block all AI browsers' - what's behind the dire warning
"Even if advisory firms like Gartner caution against using these tools inside corporate environments, history suggests adoption will be inevitable -- security has never been able to fully stop productivity-driven tool adoption, especially when companies feel pressured to use the 'latest and greatest' to keep up," Ramachandran told .
"AI browsers will become the default, not a niche category," he continued. "They'll introduce a new and unusually powerful attack surface because they blend browsing with autonomous actions, sensitive corporate context with external content, and agent-driven decisions with execution capability. This shift will create a major headache for existing enterprise security solutions, because most security stacks today were not designed for browsers that act like agents."
4. Threat actors will use AI to go after the weakest link - humans
Threat actors are still having a relatively easy time with attacks that start as social engineering campaigns but end in extremely damaging credential theft. In 2026, however, threat actors are expected to take those TTPs to an entirely new level by augmenting their social engineering efforts with AI.
Also: Battered by cyberattacks, Salesforce faces a trust problem - and a potential class action lawsuit
"In 2026, we anticipate sophisticated threat actors like ShinyHunters (aka, UNC6240) will accelerate the use of highly manipulative AI-enabled social engineering, making it a significant threat," noted Google cybersecurity leaders. "The key to their success in 2025 was avoiding technical exploits and instead focusing on human weaknesses, particularly through voice phishing. Vishing is poised to incorporate AI-driven voice cloning to create hyperrealistic impersonations, notably of executives or IT staff. This approach will be exacerbated by the increasing use of AI in other aspects of social engineering. This includes reconnaissance, background research, and the crafting of realistic phishing messages. AI allows for scalable, customized attacks that bypass traditional security tools, as the focus is on human weaknesses rather than the technology stack."
According to Pindrop CEO and co-founder Vijay Balasubramaniyan, 70% of confirmed healthcare fraud now originates from bots. That bot activity is bad enough on its own. But once AI is added as a main ingredient, Balasubramaniyan anticipates things will get a lot worse.
"Bot activity surged 9,600% in the second half of 2025 across some of our largest customers, demonstrating how quickly AI-based fraud scales once deployed," he told . "In 2026, I predict that the majority of enterprise fraud will originate from interactions with AI-driven bots capable of natural conversation, real-time social engineering, and automated account takeover. Instead of isolated human attacks, intelligent AI bots are probing systems, interacting with humans, and draining accounts continuously."
5. AI will expose APIs as a too-easily-exploited point of attack
While humans will always be the weakest link in any system, application programming interfaces (APIs) may not be far behind -- especially undocumented or unofficial ones. The tasklet.ai AI agent authoring and hosting service, for example, can create AI agents of just about any kind, relying on just about any service. That capability is enabled, surprisingly, by an even more impressive superpower: its ability to automatically discover and leverage nearly any API. As tasklet founder Andrew Lee described it to me, if tasklet needs access to a service in order to launch an AI agent, that service doesn't necessarily need to offer an API that was intentionally designed for programmatic access. Tasklet just relies on AI to figure it out.
Also: The coming AI agent crisis: Why Okta's new security standard is a must-have for your business
Does this sound trivial? I assure you that it's not. Over the last 15 years, billions have been spent on the art of developer relations and on delivering the best possible developer experiences (DXs) to maximize the consumability of APIs and grease the wheels of software integration and composable applications. Even the innovation of the model context protocol (MCP) was a response to the need for better DXs for universal programmatic access between software and AI. But if you heard Andrew Lee explain how tasklet works, you'd soon realize that the idea of APIs, MCP, and optimal DXs is probably dead.
Not only does tasklet independently figure out how to programmatically access a service (again, even when APIs for that service don't exist), it automatically builds and hosts the integration -- in the context of agentic AI. I spent 15 years doing meaningful work in the belly of the API economy. Or so I thought. When I saw tasklet for the first time, I immediately wondered if all that work was a complete waste of time.
Here's the point: If Andrew Lee at tasklet can do it, so can threat actors. After seeing how tasklet works, it's not hard to imagine them harnessing AI to not only discover your programmable interfaces (whether you know about them or not), but to write the code that exploits them.
"Command and control infrastructures will likely undergo a major transformation as adversaries shift to 'living off the cloud,' routing malicious traffic through the APIs of widely trusted services," Picus Security's ?zarslan told . "By masking communications within legitimate development and operational traffic to major cloud providers and AI platforms, attackers will render traditional blocklists and firewall rules ineffective. This trend indicates a future where distinguishing between authorized business activity and active backdoor signaling will require deep content inspection rather than simple reputation-based filtering."
Also: OpenAI user data was breached, but changing your password won't help - here's why
Echoing earlier comments from NCC's Gibbons, NCC's Dankaart said, "Expect campaigns to leverage AI for adaptive payloads and lateral movement across industrial networks." Programmable interfaces, like APIs, exist at the base of that lateral movement food chain -- for legitimate as well as illegitimate actors.
"While 2025 was the year of the agent, 2026 will be the year of interactions," said NCC's technical director and head of AI and ML David Brauchler. "Multi-agent systems are growing in popularity with the advent of [API] standards like MCP, and agents are being granted access to higher-trust operations, such as online transactions via Agent Commerce Protocol (ACP). We are likely to see agents grow in their capabilities, privileges, and communication complexity over the next year. And their risk profile will grow alongside them."
6. Extortion tactics will evolve from ransomware encryption
According to research from Cybersecurity Ventures, the global total cost of ransomware damage is expected to increase by 30%, from $57 billion in 2025 to $74 billion in 2026. By 2031, the firm expects those costs to rise to as much as $276 billion. For some organizations, ransomware isn't just a threat to the bottom line; it's a threat to the business's survival. In July 2025, a ransomware attack forced the 158-year-old British transport company KNP to permanently shut its doors, resulting in 700 employees losing their jobs.
"As a form of extortion, ransomware will continue to evolve and cross-link with AI. Expect an early wave of 'agentic malware' and AI-augmented ransomware campaigns," said NCC's Gibbons, Referring to a practice known as ransomware encryption (threat actors lock organizations out of their own systems by encrypting those systems until a ransom is paid), Gibbons added, "Instead of just encrypting systems, ransomware will shift towards greater dynamics in stealing, manipulating and threatening to leak or alter sensitive data, targeting backups, cloud services and supply chains."
Also: No one pays ransomware demands anymore - so attackers have a new goal
Picus Security's Özarslan agrees that 2026 will bring a shift in extortion tactics. "The volume of ransomware encryption attacks is expected to decrease significantly in 2026 as adversaries pivot their business models," he told me. "Rather than relying on the disruptive tactic of locking systems, ransomware will likely prioritize silent data theft for extortion, valuing long-term persistence over immediate chaos. This strategic shift suggests that attackers will focus on maintaining a quiet foothold within networks to exfiltrate sensitive assets undetected, effectively keeping the host operational for prolonged exploitation instead of causing an immediate shutdown."
From Google's point of view, ransomware, data theft, and multifaceted extortion will combine in 2026 to be the most financially disruptive category of global cybercrime. More often than not, such disruptions involve a so-called blast radius that extends outward from the initial attack.
"This is due not only to the sustained quantity of incidents, but also to the cascading economic fallout that consistently impacts suppliers, customers, and communities beyond the initial victim," noted Google's cybersecurity leaders. "The 2,302 victims listed on data leak sites (DLS) in Q1 2025 represented the highest single-quarter count observed since we began tracking these sites in 2020, confirming the maturity o...
