10 ways AI can inflict unprecedented damage in 2026

ZDNET's key takeaways

  • In 2026, weaponized AI will cause unprecedented harm.
  • Malicious AI agents will evade detection as they roam networks.
  • CISOs must upskill their teams to deal with AI-related threats.

Looking back at the biggest cybersecurity breaches and intrusions of 2025, here's what I wonder: Will those trends continue unabated into the new year? Or, will 2026 be full of new surprises as threat actors attempt to stay one step ahead of the cybersecurity pros trying to anticipate their next move? 

According to the threat intelligence and cybersecurity experts I've talked to, it's likely to be a bit of each. And it should come as no surprise that artificial intelligence topped the threat list for many researchers.  

Also: How these state AI safety laws change the face of regulation in the US

[For this report, I checked in with seven organizations, all trusted sources for my cybersecurity reporting during 2025.]  

Threat actors started using AI in 2025. It'll get much worse in 2026

The weaponization of AI in 2025 appears poised to turn an evolutionary corner in 2026, making previous generations of malware appear benign by comparison. 

Also: Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it

"In 2026 and beyond, threat actor use of AI is expected to transition decisively from the exception to the norm, noticeably transforming the cyber threat landscape," noted security leaders at Google's Mandiant and Threat Intelligence Group (GTIG). "We anticipate that actors will fully leverage AI to enhance the speed, scope, and effectiveness of operations, building upon the robust evidence and novel use cases observed in 2025. This includes social engineering, information operations, and malware development."

"Additionally," Google continued, "we anticipate threat actors will increasingly adopt agentic systems to streamline and scale attacks by automating steps across the attack lifecycle. We may also begin to see other AI threats increasingly being discussed in security research, such as prompt injection and direct targeting of the models themselves."

Floris Dankaart, lead product manager in NCC's Managed Extended Detection and Response Group, said: "2025 marked the first large-scale AI-orchestrated cyber espionage campaign, where Anthropic's Claude was used to infiltrate global targets. It was already apparent that tools for such a campaign were being developed (for example, "Villager"). This trend will continue in 2026, and AI's use as a sword will be followed by an increase in AI's use as a shield."

Across various sites, Villager is discussed as the likely AI-native heir to the Cobalt Strike throne. Cobalt Strike is an automated penetration-testing tool widely used by cybersecurity pros to emulate threat actor behavior and gauge an organization's responses upon detection. Unfortunately, Cobalt Strike was also weaponized by malicious actors. 

In contrast to Cobalt Strike, however, Villager has AI in its DNA and is therefore viewed by the cybersecurity community as a more capable alternative. But much the same way Cobalt Strike was weaponized for illicit activities, Villager could be poised to do as much harm, if not more. That concern is amplified by Villager's Chinese origins: China is well-known for its sprawling cyber-espionage initiatives, and there's a distinct possibility that Villager was developed with malicious use in mind from the start.

Also: Anthropic to Claude: Make good choices!

"While Anthropic's recent report on a Chinese nation-state threat actor's use of AI in a campaign lacked details, it demonstrated the continued evolutionary role of AI in attack chains and was the simplest attack we'll see moving into the future," noted LastPass senior principal analyst Mike Kosak. According to Kosak, the cybersecurity community is already off to a bad start in its attempts to stay one step ahead of malicious actors. "Right now, threat actors are learning the technology and setting the bar," he said. 

In all, my conversations with threat intelligence and cybersecurity experts identified 10 areas of vulnerability that deserve every business leader's attention in 2026.   

1. AI-enabled malware will unleash havoc

2025 was a pivotal year for AI-enabled malware, a category of malware that is noteworthy for either preying on victims' use of AI or using AI itself to conduct its malicious activities. In November 2025, GTIG published a summary of its AI-involved malware observations, noting that "adversaries are no longer leveraging AI just for productivity gains; they are deploying novel AI-enabled malware in active operations." That shift, according to GTIG, "marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution."

The report goes on to identify several such malware families by name, including Fruitshell, Promptflux, Promptlock, and PromptSteal, the last of which has been observed in the wild using a large language model (LLM) to generate one-line PowerShell commands capable of finding and exfiltrating sensitive data from Windows-based computers. 
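
To make the defender's side of that concrete, here is a minimal, hypothetical detection sketch in Python: it scores PowerShell command lines for traits often associated with discovery-and-exfiltration one-liners like those attributed to PromptSteal. The log field names and patterns are illustrative assumptions, not a production ruleset.

```python
# Hypothetical sketch: flag PowerShell command lines that combine traits
# commonly seen in data-discovery and exfiltration one-liners.
# The event/log format ("cmdline" field) is an assumption for illustration.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"-EncodedCommand", re.IGNORECASE),                      # base64-obfuscated payload
    re.compile(r"Invoke-WebRequest|Invoke-RestMethod", re.IGNORECASE),  # outbound upload
    re.compile(r"Get-ChildItem .*-Recurse.*(\.docx|\.xlsx|\.pdf)", re.IGNORECASE),  # bulk doc discovery
    re.compile(r"Compress-Archive", re.IGNORECASE),                     # staging data for exfil
]

def score_command(cmdline: str) -> int:
    """Return how many suspicious traits a PowerShell command line exhibits."""
    return sum(1 for p in SUSPICIOUS_PATTERNS if p.search(cmdline))

def triage(events: list[dict]) -> list[dict]:
    # Surface only events combining two or more traits; single hits are
    # common in legitimate administrative activity.
    return [e for e in events if score_command(e.get("cmdline", "")) >= 2]
```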

Also: Your phishing detection skills are no match for the biggest security threats

"In 2026, threat actors will increasingly deploy AI-enabled malware in active operations," noted LastPass cyber threat intelligence analyst Stephanie Schneider. "This AI can generate scripts, alter codes to avoid detection, and create malicious functions on demand. Nation-state actors have used AI-powered malware to adapt, alter, and pivot campaigns in real-time, and these campaigns are expected to improve as the technology continues to develop. AI-powered malware will likely become more autonomous in 2026, ultimately increasing the threat landscape for defenders."

It's that ability of AI-enabled malware to dynamically adapt, morph, and change attack strategies that is extremely worrisome. When battling human attackers, defenders are at least contending with the wits and speed of their own species. But those defenders will increasingly find themselves at a significant speed and scale disadvantage once a threat actor's payload can autonomously adapt to countermeasures and to human presence at machine speeds.

"Malicious code is predicted to become increasingly 'self-aware,' utilizing advanced calculations to verify the presence of a human user before executing," Picus Security co-founder and VP Süleyman Özarslan told ZDNET. "Instead of blindly detonating, malware will likely analyze interaction patterns to distinguish between actual humans and automated analysis environments. This evolution suggests that automated sandboxes will face significant challenges, as threats will simply remain dormant or 'play dead' upon detecting the sterile inputs typical of security tools, executing only when convinced they are unobserved."

2. Agentic AI is evolving into every threat actor's fantasy

While AI-enabled malware is of grave concern, the growing reliance of threat actors on agentic AI also warrants significant attention. According to the aforementioned report from Anthropic, the Claude LLM developer discovered how attackers were using agentic AI to execute their cyberattacks.

"The threat actor -- whom we assess with high confidence was a Chinese state-sponsored group -- manipulated our Claude Code tool into attempting infiltration into roughly 30 global targets and succeeded in a small number of cases," wrote the authors of Anthropic's report. "The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention."

Also: AI's scary new trick: Conducting cyberattacks instead of just helping out

Alex Cox, director of Threat Intelligence, Mitigation, and Escalation at LastPass, echoed that warning: "Defenders will likely see threat actors use agentic AI in an automated fashion as part of intrusion activities, continue AI-driven phishing campaigns, and continue development of advanced AI-enabled malware. They'll use agentic AI to implement hacking agents that support their campaigns through autonomous work. In 2026, attackers will shift from passive use of AI in preparation activities to automation of campaigns and the evolution of their tactics, techniques, and procedures (TTPs)." 

From the threat actor's point of view, agentic AI seems nearly purpose-built for one key malicious TTP: lateral movement. According to a CrowdStrike post, "Lateral movement refers to the techniques that a cyberattacker uses, after gaining initial access, to move deeper into a network in search of sensitive data and other high-value assets. After entering the network, the attacker maintains ongoing access by moving through the compromised environment and obtaining increased privileges using various tools. It allows a threat actor to avoid detection and retain access, even if discovered on the machine that was first infected. And with a protracted dwell time, data theft might not occur until weeks or even months after the original breach." 
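
As a rough illustration of how defenders hunt for lateral movement, the hedged sketch below flags any source that authenticates to unusually many distinct hosts within a sliding time window. The event fields ("src", "dst", "ts") and the threshold are assumptions for illustration, not any product's schema.

```python
# Hedged sketch: one classic lateral-movement signal is a single source
# host or account authenticating to many distinct machines in a short span.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=30)
FANOUT_THRESHOLD = 8  # tune to your environment's baseline

def fanout_alerts(events):
    """events: iterable of dicts with 'src', 'dst', 'ts' (datetime), time-sorted."""
    seen = defaultdict(list)  # src -> [(ts, dst), ...]
    alerts = []
    for e in events:
        bucket = seen[e["src"]]
        bucket.append((e["ts"], e["dst"]))
        # Keep only entries inside the sliding window.
        bucket[:] = [(t, d) for t, d in bucket if e["ts"] - t <= WINDOW]
        if len({d for _, d in bucket}) >= FANOUT_THRESHOLD:
            alerts.append({"src": e["src"], "targets": len(bucket), "at": e["ts"]})
    return alerts
```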

It's not hard to imagine how agentic AI could be a threat actor's fantasy when coupled with such lateral movement. 

"AI is both the biggest accelerator and the biggest wildcard. Threat actors will increasingly use AI agents to automate reconnaissance, phishing, lateral movement, and malware development, making attacks faster, adaptive, and harder to detect," wrote NCC director Nigel Gibbons. 

Perhaps one of the biggest fears related to agentic AI will be the extent to which end users may inadvertently expose sensitive information and assets during the deployment of their own agents without IT oversight. 

Also: 96% of IT pros say AI agents are a security risk, but they're deploying them anyway

"By 2026, we expect the proliferation of sophisticated AI Agents will escalate the shadow AI problem into a critical 'shadow agent' challenge. In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval," wrote Google's cybersecurity experts. "This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft." 

Unfortunately, banning agentic AI will not be an option. Between its promise of delivering greatly improved efficiency and executive pressure to derive competitive advantage from AI, end-users will likely respond by taking matters into their own hands if they're not sufficiently enabled by their IT departments.

According to AppOmni director of AI Melissa Ruzzi, there will be "increased pressure from users expecting AI agents to become more powerful, and organizations under pressure to develop and release agents to production as fast as possible. And it will be especially true for AI agents running in SaaS environments, where sensitive data is likely already present and misconfigurations may already pose a risk." 

3. Prompt injection: AI tools will be the new attack surface

More to Google's point about how agentic AI deployments may lead to new "data leaks, compliance violations, and IP theft," any time new, supplemental platforms are layered onto an organization's existing IT stack, that organization will need to deal with an expansion in vulnerable surface areas.  

"By trying to make AI as powerful as it can be, organizations may misconfigure settings, leading to overpermissions and data exposure. They may also grant too much power to one AI, creating a major single point of failure," wrote AppOmni's Ruzzi. "In 2026, we'll see other AI security risks heighten even more, stemming from excessive permissions granted to AI and a lack of instructions provided to it about how to choose and use tools, potentially leading to data breaches."

Also: Are AI browsers worth the security risk? Why experts are worried

Meanwhile, AI-enabled malware might not be possible were it not for the incremental surface area created by organizational or shadow IT adoption of large language models. 

"While AI promises unprecedented growth, it also introduces new, sophisticated risks. One of the most critical is prompt injection, a cyberattack that essentially manipulates AI, making it bypass its security protocols and follow an attacker's hidden command," wrote Google's cybersecurity leaders. "This isn't just a future threat; it's a present danger, and we anticipate a significant rise in these attacks throughout 2026. The increasing accessibility of powerful AI models and the growing number of businesses integrating them into daily operations create perfect conditions for prompt injection attacks. Threat actors are rapidly refining their techniques, and the low-cost, high-reward nature of these attacks makes them an attractive option. We anticipate a rise in targeted attacks on enterprise AI systems in 2026, as attackers move from proof-of-concept exploits to large-scale data exfiltration and sabotage campaigns."

In many cases, the sanctioned or unsanctioned introduction of AI as a supplemental platform introduces a far more passive form of surface area -- the one created when untrained users feed proprietary corporate information into a publicly shared LLM. Such was the case when Samsung engineers prompted ChatGPT with sensitive source code, thereby exposing that code to the wider community of ChatGPT users. According to Dark Reading, one of the engineers "pasted buggy source code from a semiconductor database into ChatGPT, with a prompt to the chatbot to fix the errors." The Dark Reading post goes on to describe how "information ends up as training data for the [LLM in a way that] someone could later retrieve the data using the right prompts." In other words, through such misprompting, the organization's vulnerable surface area is expanded to include public services beyond its control. 
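
One control the Samsung episode argues for is a pre-send filter that screens prompts for likely secrets before they ever reach a public LLM. The sketch below is illustrative only; the patterns are examples, not an exhaustive or reliable DLP ruleset.

```python
# Hedged sketch: block prompts that appear to contain credentials or keys
# before they are sent to an external model. Patterns are illustrative.
import re

SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key shape
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive material."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)

prompt = "please fix this: password = hunter2"
if not safe_to_send(prompt):
    print("Blocked: prompt appears to contain sensitive material.")
```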

2026 is also the year in which the fusion of AI into web browsers could present new defense challenges. Between new entries into the market -- such as OpenAI's ChatGPT Atlas -- and the transformation of existing browsers like Chrome, Edge, and Firefox into AI front-ends, SquareX founder Vivek Ramachandran sees their adoption as a fait accompli. 

Also: Gartner urges businesses to 'block all AI browsers' - what's behind the dire warning

"Even if advisory firms like Gartner caution against using these tools inside corporate environments, history suggests adoption will be inevitable -- security has never been able to fully stop productivity-driven tool adoption, especially when companies feel pressured to use the 'latest and greatest' to keep up," Ramachandran told ZDNET. 

"AI browsers will become the default, not a niche category," he continued. "They'll introduce a new and unusually powerful attack surface because they blend browsing with autonomous actions, sensitive corporate context with external content, and agent-driven decisions with execution capability. This shift will create a major headache for existing enterprise security solutions, because most security stacks today were not designed for browsers that act like agents."

4. Threat actors will use AI to go after the weakest link - humans

Threat actors are still having a relatively easy time with attacks that start as social engineering campaigns but end with extremely damaging credential thefts. However, in 2026, almost as if to take their social engineering TTPs to an entirely new level, threat actors are expected to enhance their social engineering efforts with AI.  

Also: Battered by cyberattacks, Salesforce faces a trust problem - and a potential class action lawsuit

"In 2026, we anticipate sophisticated threat actors like ShinyHunters (aka, UNC6240) will accelerate the use of highly manipulative AI-enabled social engineering, making it a significant threat," noted Google cybersecurity leaders. "The key to their success in 2025 was avoiding technical exploits and instead focusing on human weaknesses, particularly through voice phishing. Vishing is poised to incorporate AI-driven voice cloning to create hyperrealistic impersonations, notably of executives or IT staff. This approach will be exacerbated by the increasing use of AI in other aspects of social engineering. This includes reconnaissance, background research, and the crafting of realistic phishing messages. AI allows for scalable, customized attacks that bypass traditional security tools, as the focus is on human weaknesses rather than the technology stack."

According to Pindrop CEO and co-founder Vijay Balasubramaniyan, 70% of confirmed healthcare fraud now originates from bots. Bot activity alone was bad enough; once AI is added as a main ingredient, Balasubramaniyan anticipates things will get a lot worse.

"Bot activity surged 9,600% in the second half of 2025 across some of our largest customers, demonstrating how quickly AI-based fraud scales once deployed," he told ZDNET. "In 2026, I predict that the majority of enterprise fraud will originate from interactions with AI-driven bots capable of natural conversation, real-time social engineering, and automated account takeover. Instead of isolated human attacks, intelligent AI bots are probing systems, interacting with humans, and draining accounts continuously."

5. AI will expose APIs as a too-easily-exploited point of attack 

While humans will always be the weakest link in any system, application programming interfaces (APIs) may not be far behind -- especially undocumented or unofficial ones. The tasklet.ai AI agent authoring and hosting service, for example, can create AI agents of just about any kind (relying on just about any service). That capability is enabled, surprisingly, by an even more impressive superpower -- its ability to automatically discover and leverage just about any API. As tasklet founder Andrew Lee described it to me, if tasklet needs access to a service in order to launch an AI agent, that service doesn't necessarily need to offer an API that was intentionally designed to offer programmatic access. Tasklet just relies on AI to figure it out. 

Also: The coming AI agent crisis: Why Okta's new security standard is a must-have for your business

Does this sound trivial? I assure you that it's not. Over the last 15 years, billions have been spent on the art of developer relations and on delivering the best possible developer experiences (DXs) to maximize the consumability of APIs and grease the wheels of software integration and composable applications. Even the innovation of the model context protocol (MCP) was a response to the need for better DXs for universal programmatic access between software and AI. But if you heard Andrew Lee explain how tasklet works, you'd soon realize that the idea of APIs, MCP, and optimal DXs is probably dead. 

Not only does tasklet independently figure out how to programmatically access a service (again, even when APIs for that service don't exist), it automatically builds and hosts the integration -- in the context of agentic AI. I spent 15 years doing meaningful work in the belly of the API economy. Or so I thought. When I saw tasklet for the first time, I immediately wondered if all that work was a complete waste of time. 

Here's the point: If Andrew Lee at tasklet can do it, so can threat actors. After seeing how tasklet works, it's not hard to imagine them harnessing AI to not only discover your programmable interfaces (whether you know about them or not), but to write the code that exploits them. 

"Command and control infrastructures will likely undergo a major transformation as adversaries shift to 'living off the cloud,' routing malicious traffic through the APIs of widely trusted services," Picus Security's Özarslan told ZDNET. "By masking communications within legitimate development and operational traffic to major cloud providers and AI platforms, attackers will render traditional blocklists and firewall rules ineffective. This trend indicates a future where distinguishing between authorized business activity and active backdoor signaling will require deep content inspection rather than simple reputation-based filtering."

Also: OpenAI user data was breached, but changing your password won't help - here's why

Echoing earlier comments from NCC's Gibbons, NCC's Dankaart said, "Expect campaigns to leverage AI for adaptive payloads and lateral movement across industrial networks." Programmable interfaces, like APIs, exist at the base of that lateral movement food chain -- for legitimate as well as illegitimate actors. 

"While 2025 was the year of the agent, 2026 will be the year of interactions," said NCC's technical director and head of AI and ML David Brauchler. "Multi-agent systems are growing in popularity with the advent of [API] standards like MCP, and agents are being granted access to higher-trust operations, such as online transactions via Agent Commerce Protocol (ACP). We are likely to see agents grow in their capabilities, privileges, and communication complexity over the next year. And their risk profile will grow alongside them."

6. Extortion tactics will evolve from ransomware encryption

According to research from Cybersecurity Ventures, the global total cost of ransomware damage is expected to increase by 30%, from $57 billion in 2025 to $74 billion in 2026. By 2031, the firm expects those costs to rise to as much as $276 billion. For some organizations, ransomware isn't just a threat to the bottom line; it's a threat to the business's survival. In July 2025, a ransomware attack forced the 158-year-old British transport company KNP to permanently shut its doors, resulting in 700 employees losing their jobs. 

"As a form of extortion, ransomware will continue to evolve and cross-link with AI. Expect an early wave of 'agentic malware' and AI-augmented ransomware campaigns," said NCC's Gibbons, Referring to a practice known as ransomware encryption (threat actors lock organizations out of their own systems by encrypting those systems until a ransom is paid), Gibbons added, "Instead of just encrypting systems, ransomware will shift towards greater dynamics in stealing, manipulating and threatening to leak or alter sensitive data, targeting backups, cloud services and supply chains."

Also: No one pays ransomware demands anymore - so attackers have a new goal

Picus Security's Özarslan agrees that 2026 will bring a shift in extortion tactics. "The volume of ransomware encryption attacks is expected to decrease significantly in 2026 as adversaries pivot their business models," he told ZDNET. "Rather than relying on the disruptive tactic of locking systems, ransomware will likely prioritize silent data theft for extortion, valuing long-term persistence over immediate chaos. This strategic shift suggests that attackers will focus on maintaining a quiet foothold within networks to exfiltrate sensitive assets undetected, effectively keeping the host operational for prolonged exploitation instead of causing an immediate shutdown."

From Google's point of view, ransomware, data theft, and multifaceted extortion will combine in 2026 to be the most financially disruptive category of global cybercrime. More often than not, such disruptions involve a so-called blast radius that extends outward from the initial attack.

 "This is due not only to the sustained quantity of incidents, but also to the cascading economic fallout that consistently impacts suppliers, customers, and communities beyond the initial victim," noted Google's cybersecurity leaders. "The 2,302 victims listed on data leak sites (DLS) in Q1 2025 represented the highest single-quarter count observed since we began tracking these sites in 2020, confirming the maturity of the cyber extortion ecosystem."

7. How the contagion spreads to industrial control and operations

Even if an organization has the resources to buy its way out of an extortion episode (and provided the threat actors keep to their end of the bargain), the disruption can be devastating to the business, particularly if the contagion spreads beyond IT proper into the industrial control system (ICS) or operational technology (OT) estates. 

"In October 2025, Jaguar Land Rover suffered a ransomware attack that forced a global production halt, disrupting supply chains and causing significant operational downtime," noted NCC's Dankaart. "This incident exemplifies how ransomware now targets manufacturing environments where IT and OT are deeply interconnected. Attackers combined encryption with data theft and public extortion tactics, pressuring the company to pay while production lines remained idle. The event highlighted the vulnerability of industrial networks and the cascading impact on suppliers and logistics. In 2026, this trend will continue, targeting ICS controllers and safety systems to maximize operational and reputational damage. Expect campaigns to leverage AI for adaptive payloads and lateral movement across industrial networks."

Also: How a simple link allowed hackers to bypass Copilot's security guardrails - and what Microsoft did about it

The team at Google was more specific about the ICS/OT surface areas, calling out Microsoft's Windows as the soft white underbelly of operational technologies. In 2026, Google expects to see "ransomware operations specifically designed to impact critical enterprise software (such as ERP systems), severely disrupting the supply chain of data essential for OT operations." 

"This vector is effective because compromising the business layer cripples the industrial environment, forcing quick payments," noted the authors of Google's report. "Meanwhile, poor hygiene like insecure remote access will continue to allow common Windows malware to breach OT networks."

8. Imposter employees: The insider threat to your organization

As if there weren't enough surface area across the IT estate to look after, organizations also cannot let their guard down when it comes to buildings and physical and virtual infrastructure -- an apparently growing area of interest for threat actors. 

"The definition of the 'insider threat' is anticipated to expand beyond rogue employees to include external actors utilizing remote access hardware to bypass endpoint security entirely," said Picus Security's Özarslan. "State-sponsored operatives are expected to increasingly deploy physical devices that plug directly into ports, granting BIOS-level control. This shift means that traditional software-based security agents will be rendered blind to these intrusions, forcing defenders to rely more heavily on physical audits and network-level anomalies to detect this class of threat."

Also: This new cyberattack tricks you into hacking yourself. Here's how to spot it

According to NCC principal security consultant Mark Frost, "Raising awareness of the ease and simplicity of physically breaking into accredited, secure environments without detection will be one of the biggest challenges to tackle. Currently, there are no specific, government-recognized standards and accreditation for the delivery of simulated physical pentesting or the people who are authorized to carry out such tests. As such, we have cheap, short physical pentests as the 'standard.' This gives false positive results. Attackers spend months executing a physical attack, but the industry standard pentest is three days."

2026 is also expected to see a continued pattern of corporate infiltration by North Korean (DPRK) operatives. In December 2025, Amazon's chief security officer, Stephen Schmidt, published a LinkedIn post detailing the online retail giant's findings about attempts by North Korean IT workers to secure remote IT jobs with the company. 

"Over the past few years, North Korean nationals have been attempting to secure remote IT jobs with companies worldwide, particularly in the US Their objective is typically straightforward: Get hired, get paid, and funnel wages back to fund the regime's weapons programs," wrote Schmidt. "At Amazon, we've stopped more than 1,800 suspected DPRK operatives from joining since April 2024, and we've detected 27% more DPRK-affiliated applications quarter over quarter this year."

These "attacks" are very sophisticated, often involving a combination of identity theft and the impression of being physically located outside North Korea. The US Department of Justice has warned of the existence of domestically located laptop farms that are remotely accessible to DPRK operatives to make it look as though they live in the US during and after the job application process. In July 2025, an Arizona woman pled guilty "for her role in a fraudulent scheme that assisted North Korean IT workers posing as US citizens and residents with obtaining remote IT positions at more than 300 US companies."

Also: Smart home hacking is a serious threat - but here's how experts actually stop it

"We already uncovered coordinated deepfake hiring schemes where AI-generated candidates use synthetic identities and deepfaked video interviews to apply for multiple roles within the same organization," Pindrop's Balasubramaniyan told ZDNET. "Today, 16.8% of candidates, or 1 in 6, are fake. In 2026, enterprises will discover that some new hires who passed interviews, onboarding, and background checks were never real. These long-term 'synthetic employees' can gain access to internal trust, systems, and sensitive data."

"The risk associated with North Korean IT worker activity will continue to extend beyond simple salary earnings," noted the experts at Google. "One objective will be direct financial gain through the abuse of employer network access, specifically targeting and stealing cryptocurrency from crypto-focused organizations. Additionally, workers will continue to leverage their employment access for strategic espionage, as shown by the theft of sensitive data from a defense contractor developing AI technology."

9. Nation-states will destabilize Western interests

The North Korean IT worker scams are not the only cybercrimes in which DPRK operatives are involved. "North Korean cyber threat actors will escalate their highly successful and lucrative operations against cryptocurrency organizations and users," noted Google in its research. "The tactics observed in 2025, which included the largest recorded cryptocurrency heist valued at approximately $1.5 billion, provide a clear indication of their focus on high-yield financially motivated attacks."

Regarding North Korea, Google noted that "campaigns will additionally rely on advanced social engineering, such as luring targets with fake 'hiring assessment' webpages. Similarly, deepfake videos will become more prevalent to build trust and deceive high-value personnel."

Also: 100 leading AI scientists map route to more 'trustworthy, reliable, secure' AI

Beyond North Korea, Google was the only cybersecurity organization surveyed that organized its analysis of adversarial nation-states by country. Russia, for example, is expected to continue its campaigns to destabilize Western geopolitics through election interference. However, in the course of doing so, it is expected to bring more advanced TTPs to bear. 

"In 2026 and beyond, Russia's cyber operations are expected to undergo a strategic shift, moving past a singular focus on short-term tactical support for the conflict in Ukraine to prioritize long-term global strategic goals," noted Google's experts. "The steady pace of cyber espionage in Europe and North America in 2025, alongside renewed use of novel and creative tactics, techniques and procedures, suggest a transition toward long-term development of advanced cyber capabilities, intelligence collection to support Russia's global political and economic interests."

Google's cybersecurity leaders further noted that "elections will remain a prime target, as seen in activities related to polls in Poland, Germany, Canada, and Moldova in 2025. Furthermore, information operation campaigns will actively seek to manipulate narratives related to news developments, such as promoting claims of alleged Western interference after Romania nullified its 2024 Presidential election."

Also: A new Chinese AI model claims to outperform GPT-5 and Sonnet 4.5 - and it's free

No discussion of nation-state threats would be complete without mention of China. "We anticipate China-nexus cyber espionage tactics, techniques, and procedures will continue to focus on maximizing operational scale and success, with some threat actors also working to minimize opportunities for detection," wrote Google's cybersecurity leaders. 

"China-nexus threat actors will continue to aggressively target edge devices (which typically lack endpoint detection and response solutions), and exploit zero-day vulnerabilities. They will also target third-party providers, since compromising one trusted partner may enable access to many downstream organizations, and abuse of legitimate partner connections makes the resulting malicious access challenging to identify."

10. Credential mismanagement to continue as a leading cybersecurity challenge

At the heart of most intrusions lies a credential or identity breach. For this reason, credential mismanagement will remain a major challenge for organizations -- especially as AI agents, which require their own credentials, begin to sprawl across corporate networks. CrowdStrike's just-announced intention to acquire SGNL is likely in anticipation of an emerging need to tame that forthcoming sprawl, although it's unclear if 2026 is the year that agentic AI will truly take off. Regardless of when agentic AI goes mainstream, companies like Microsoft and Okta are trying to get ahead of the problem by expanding their identity management solutions to manage the identities of AI agents as if they were humans. 
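
One way to picture "managing AI agents as if they were humans" is per-agent, short-lived, narrowly scoped credentials. The sketch below uses the PyJWT library; the claim names, scopes, and 15-minute lifetime are illustrative assumptions, not any vendor's scheme.

```python
# Sketch: mint each agent its own short-lived, narrowly scoped token, so
# a leaked credential expires quickly and activity is attributable.
from datetime import datetime, timedelta, timezone

import jwt  # third-party: pip install PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # fetch from a vault, not code

def mint_agent_token(agent_id: str, scopes: list[str]) -> str:
    now = datetime.now(timezone.utc)
    claims = {
        "sub": f"agent:{agent_id}",   # agent identity, distinct from humans
        "scope": " ".join(scopes),    # least privilege, named explicitly
        "iat": now,
        "exp": now + timedelta(minutes=15),  # short life limits blast radius
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = mint_agent_token("invoice-reconciler-01", ["erp:read"])
```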

Also: Unchecked AI agents could be disastrous for us all - but OpenID Foundation has a solution

Even so, NCC's Gibbons anticipates trouble at the layer where human and machine identity are managed, saying "third-party services, SaaS dependencies and identity mismanagement (including human and AI accounts) will become the dominant entry vectors, overtaking traditional perimeter breaches." 

Such findings are not surprising given the very recent history of credential management.

Salesforce breaches dominated 2025 cybersecurity news

The biggest credential mismanagement-related breach in 2025 was actually a collection of breaches, most of which tied back to Salesforce. On the surface, it appeared as though some of the world's largest organizations, including tech companies, airlines, and luxury brands, were the ones getting hacked. But more often than not, those breaches were actually connected to a series of sophisticated attacks designed to exfiltrate the customer data that Salesforce hosts for them. 

Although it has never been verified, the hackers claimed to have stolen more than a billion customer records from more than 700 organizations. While many enterprises have publicly disclosed the details of their breaches (with the number of impacted records typically in the millions for each company), many have not. The financial and reputational harm, the full extent of which has yet to be realized, has already been far-reaching. 

Although Salesforce has denied culpability, many of the affected brands have pending lawsuits against the CRM company.

OAuth credentials at the center of the Salesforce breaches

In the case of the Salesforce breaches, many of the exfiltrations were enabled by the theft of a type of credential known as an OAuth token. Just as end users rely on a secret password to access various applications, applications rely on OAuth tokens as shared secrets when they must authenticate to other applications -- a context commonly described as machine-to-machine integration.
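
A minimal sketch of that machine-to-machine pattern, using OAuth 2.0's client-credentials grant, shows why a stolen token is so valuable: the bearer token alone authorizes API calls, no user password required. The endpoints and client values below are placeholders, not real services.

```python
# Minimal OAuth 2.0 client-credentials exchange. URLs and client values
# are placeholders for illustration only.
import requests  # third-party: pip install requests

resp = requests.post(
    "https://auth.example.com/oauth/token",       # placeholder issuer
    data={
        "grant_type": "client_credentials",
        "client_id": "integration-app",
        "client_secret": "…",                     # stored server-side
        "scope": "crm.records.read",
    },
    timeout=10,
)
access_token = resp.json()["access_token"]

# Any holder of access_token gets the same access -- the "skeleton key."
records = requests.get(
    "https://api.example.com/v1/records",         # placeholder resource API
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
```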

Harkening back to those exfiltrations while referring to a Top 25 Common Weakness Enumeration list for 2025 jointly published by the US Cybersecurity and Infrastructure Security Agency (CISA) and the Homeland Security Systems Engineering and Development Institute (HSSEDI) operated by the MITRE Corporation, AppOmni chief security officer Cory Michal noted that CISA is oddly "underplaying CWE-522 regarding insufficiently protected credentials."

OAuth credentials "become a skeleton key into thousands of downstream SaaS tenants," said Michal. "We're seeing adversaries use those stolen tokens to access CRM and collaboration data without ever touching a user's password, and I'd expect that pattern, and therefore CWE-522's real-world impact to keep growing in 2026."

Also: The best password managers of 2026: Expert tested

Despite CWE-522 being underplayed on that Top 25 list, the Computer Security Resource Center at the US National Institute of Standards and Technology is giving it the priority it deserves, having published a NIST interagency report that seeks expert comment on protecting tokens and assertions from forgery, theft, and misuse. 

"There's no question the success of ShinyHunters/UNC6040 and Drift/UNC6395 has caught the attention of other threat groups," noted AppOmni CTO and co-founder Brian Soby. "They will view those incidents as clear examples of the weaknesses in today's zero trust technologies and will double down on similar attack methods."

"Identity compromise is projected to transition from a preliminary step to a primary objective, with adversaries focusing on logging in rather than hacking in," Picus Security's Özarslan told ZDNET. "Attackers will likely prioritize the silent extraction of credentials from password stores and browsers, bypassing complex encryption mechanisms by abusing legitimate system APIs. This evolution suggests that the perimeter will definitively move to the identity layer, where the theft of valid sessions and tokens allows threat actors to operate with the freedom of legitimate users, making behavioral analysis crucial for detection."

CISOs held more accountable than ever in 2026

Ten years from now, cybersecurity pros will likely look back on 2026 as a tipping point for defenders. As NCC's Brauchler put it, "[AI] is an exciting era of technology, but the threat landscape of AI continues to look like it will get much worse before it gets better."

Brauchler's colleague, NCC Transport Practice Lead Gary Cannon, captured the worsening consequences quite well when he wrote that "breaches are no longer isolated events; they are systemic risks impacting reputation, revenue, and regulatory compliance. Boards and executive committees will increasingly see cyber risk as a top-tier business risk, not just an IT issue."

Also: 9 strategic imperatives every business leader must master to survive and thrive in 2026

As threat actors scale their attacks, and as those attacks land more frequently, evade detection longer, and present greater risk to the business, chief information security officers (CISOs) will be saddled with more accountability than ever. 

"2026 will be remembered as the year the security industry made accountability non-negotiable. The CISO's role will evolve into that of a business risk leader," wrote Cannon. "As ransomware and large-scale breaches continue to escalate, cybersecurity will climb even higher on corporate risk registers. This shift will demand clarity, business-aligned communication, and proof that investment translates into resilience. CISOs will gain unprecedented budgets and resources in 2026. However, with greater investment comes greater scrutiny. The expectation will be clear: Deliver measurable resilience."

Cannon also noted that "the evolving accountability landscape for CISOs will surprise many. Historically, breaches were seen as 'experience-building' events for security leaders. By late 2026, that narrative will shift. Breaches tied to poor decisions or underinvestment will carry real consequences, including stalled careers." NCC's Gibbons put it even more succinctly: "cyber-resilience will become a competitive differentiator."

Also: I test AI for a living, and these 3 free tools are the ones I use most

"Organizations will demand proactive risk management, measurable outcomes, and transparency," noted Cannon. " Cyber security will become a shared responsibility across the C-suite, with stronger regulatory frameworks and even personal liability for executives in certain jurisdictions."

As CISOs get bigger budgets, they'll need to train existing team members while also recruiting personnel from a relatively limited talent pool. NCC Managed Services Portfolio VP Natalie Walker said, "There is a growing cyber skills shortage, making it more important for organizations to leverage managed service providers for data analysis and prioritization, providing intelligent action, and enabling security teams to focus more on critical mitigating activities, which ultimately reduces their risk appetite and makes the organization more secure."

"The CISO will go from being an IT security manager to a C-suite executive in charge of making sure the organization stays resilient in 2026," Picus's Özarslan told ZDNET. "The traditional 'gatekeeper' approach will no longer be successful. This requires a proactive, predictive, and preemptive approach, anticipating and mitigating threats before they impact the organization. CISOs will also be crucial to AI governance. They will need to collaborate with the CTO and legal teams to make sure that AI systems are safe, ethical, and compliant."
