Cyber-attacks have more than doubled worldwide in just four years, from 818 per organization in 2021 to almost 2,000 per organization last year, according to the World Economic Forum (WEF). It’s a staggering statistic.
And small businesses are particularly exposed, now seven times more likely to report insufficient cyber-resilience than they were in 2022. Whether we like it or not, artificial intelligence (AI) tools have had a big role to play here, not just with the increasing volume of attacks but also the sophistication.
General Manager of security products of Alibaba Cloud Intelligence.
Risks are now emerging at every layer of the AI stack, from prompt injection and data leakage to AI-powered bot scraping and deepfakes.
As a recent industry report reveals, attackers are now using large language models (LLMs) to craft convincing phishing campaigns, write polymorphic malware, and automate social-engineering at scale.
The result is a threat environment that learns, adapts, and scales faster than human analysts can respond.
What lies beneath the layers?
AI systems are built in layers, and each one brings its own weak spots. At the environment layer, which provides computing, networking and storage, the risks resemble those in traditional IT but the scale and complexity of AI workloads make attacks harder to detect.
The model layer is where manipulation starts. Prompt injection, non-compliant content generation and data exfiltration are now among the top threats, as highlighted in the OWASP 2025 Top 10 for LLM Applications.
The context layer, home to retrieval-augmented generation (RAG) databases and memory stores, has become a prime target for data theft. Meanwhile, at the tools and application layers, over-privileged APIs and compromised AI agents can give attackers the keys to entire workflows.
In other words, the attack surface is expanding in every direction, and with it, the need for smarter defenses. The answer isn’t to abandon AI but to use AI to secure AI. So a comprehensive security framework needs to span the full AI lifecycle, protecting three essential layers: model infrastructure, the model itself, and AI applications.
When security is embedded into business workflows rather than bolted on afterward, organizations gain efficient, low-latency protection without sacrificing convenience or performance.
Security teams are already deploying intelligent guardrails that scan prompts for malicious intent, detect anomalous API behavior and watermark generated content for traceability.
The latest generation of AI-driven security operations applies multi-agent models to analyze billions of daily events, flag emerging risks in real time and automate first-response actions.
According to PwC’s Digital Trust Insights 2026 survey, AI now tops the list of investment priorities for Chief Information Security Officers (CISOs) worldwide, a sign that enterprises are finally treating cyber resilience as a learning system, not a static checklist.
Threats that lurk in the shadows
Yet even as enterprises strengthen their defenses, a new and largely self-inflicted risk is taking shape inside their own networks. It’s called shadow AI. In most organizations, employees are using generative tools to summarize reports, write code or analyze customers, often without official approval or data-governance controls.
According to one report from Netskope, around 90 percent of enterprises now use GenAI applications, and more than 70 per cent of those tools fall under shadow IT. Every unmonitored prompt or unvetted plug-in becomes a potential leak of sensitive data.
Internal analysis across the industry suggests that nearly 45 percent of AI-related network traffic contains sensitive information, from intellectual property to customer records. In parallel, AI-powered bots are multiplying at speed. Within six months, bot traffic linked to data scraping and automated requests has quadrupled.
While AI promises smarter, faster operations, it’s also consuming ever-greater volumes of confidential data, creating more to defend and more to lose.
A safety-belt for AI
Governments and regulators are beginning to recognize the scale of the challenge. Many AI governance rules all point to a future where organizations will be expected to demonstrate not only compliance, but continuous visibility over their AI systems.
Security postures will need to account for model training, data provenance, and the behavior of autonomous agents, not just network traffic or access logs.
For many, that means embedding security directly into the development pipeline, adopting zero-trust architectures, and treating AI models as living assets that require constant monitoring.
Looking ahead, the battle lines are already being redrawn. The next phase of cybersecurity will depend on a dual engine - one that protects AI systems while also using AI to detect and neutralize threats.
As machine-learning models evolve, so too must the defenses that surround them. Static rules and manual responses can’t keep pace with attackers who automate creativity and exploit speed. What’s needed is an ecosystem that learns as fast as it defends.
That shift is already underway. Multi-agent security platforms now coordinate detection, triage and recovery across billions of daily events.
Lightweight, domain-specific models filter out the noise, while larger reasoning models identify previously unseen attack patterns. It’s an intelligence pipeline that mirrors the adversaries, only this one’s built for defense.
The application of intelligence
The future of digital security will hinge on collaboration between human insight and machine intuition. In practical terms, that means re-training the workforce, as much as re-architecting the infrastructure.
Analysts who can interpret AI outputs, data scientists who understand risk, and policymakers who build trust through transparency are very much needed. The long game is about confidence, not just resilience. Confidence that the systems powering modern life are learning to protect themselves.
Because ultimately, AI isn’t the villain of this story. The same algorithms that make attacks more potent can also make protection more precise. The question for business leaders everywhere is whether they’ll invest fast enough to let intelligence, not inertia, define the next chapter of cybersecurity.
We've featured the best endpoint protection software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro







English (US) ·