
ZDNET's key takeaways
- The more powerful AI gets, the more threat actors will harness it to attack.
- Organizations and individuals must match their adversaries' tenacity.
- Experts recommend best practices for avoiding catastrophe.
When was the last time you wondered if that mysterious phone caller who hung up after you answered "hello" made a recording of your voice in a way that could be used against you? The FCC warned us about such scams nearly a decade ago, before artificial intelligence was even on the scene.
Now -- with AI cloning your voice and conversational tone from as little as three seconds of audio -- the stakes are much higher.
Whether used for legitimate or nefarious purposes, AI's chief selling proposition has been its knack for speed and scale. In the hands of a threat actor, a lot of damage can be done in the blink of an eye. And it's getting worse. Your only meaningful response is to match your adversaries' tenacity. In this article, we'll review the growing threats and best practices you can use to protect yourself and your organization.
Threat actors are rapidly adapting AI to their TTPs
In its January 2025 report on the Adversarial Misuse of Generative AI, Google's Threat Intelligence Group (GTIG) reported that threat actor reliance on Google Gemini was mostly contained to run-of-the-mill productivity use cases.
"Rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini's safety controls," said the post's authors. "Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities. At present, they primarily use AI for research, troubleshooting code, and creating and localizing content."
Also: Will AI make cybersecurity obsolete or is Silicon Valley confabulating again?
In a November 2025 post, GTIG noted significant advancements in the AI-related tactics, techniques, and procedures (TTPs) used by threat actors: "Adversaries are no longer leveraging artificial intelligence just for productivity gains; they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution."
Last year, Anthropic published a similar post on detecting and preventing malicious use of its Claude LLM. "The most novel case of misuse detected was a professional 'influence-as-a-service' operation showcasing a distinct evolution in how certain actors are leveraging LLMs for influence operation campaigns," wrote the report's authors. "What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users."
Echoing Google's concerns about AI-assisted malware development, Anthropic's report added, "We have also observed cases of credential stuffing operations, recruitment fraud campaigns, and a novice actor using AI to enhance their technical capabilities for malware generation beyond their skill level."
The unstoppable evolution of deepfakes
Perhaps the most concerning of all the evolving threats right now is the growing realism of deepfake videos, images, and audio, and the opportunities they create for mistaken identity. As evidenced by ByteDance's February 2026 launch of Seedance 2.0, which produced an eerily convincing scene of Tom Cruise fighting Brad Pitt (and drew some very swift backlash from the entertainment industry), the latest video generation models are making it increasingly difficult, if not impossible, to spot deepfakes.
This was a 2 line prompt in seedance 2. If the hollywood is cooked guys are right maybe the hollywood is cooked guys are cooked too idk. pic.twitter.com/dNTyLUIwAV
— Ruairi Robinson (@RuairiRobinson) February 11, 2026

According to LastPass' director of AI innovation Alex Cox, Seedance represents a concerning inflection point in the overall evolution of deepfake video production tools and their potential for wrongdoing. "AI can produce content that is almost indistinguishable, if not completely indistinguishable, from real human activity," he told ZDNET. "We've gotten to the point of multimodal AI capabilities that most forms of online human interaction can be believably faked by AI. Written interaction is still the absolute strong point. But video and audio are rapidly approaching similar levels."
Also: Why enterprise AI agents could become the ultimate insider threat
Cox predicts that AI-powered video and audio tools will evolve to the point where we can be pretty easily tricked into believing we're dealing with an authentic person -- even in a video meeting -- when, in reality, it's a deepfake. "[Then], add the concept of virtual avatars and real-world translation. Imagine an attacker researches common public figures in your organization and creates a virtual avatar that not only looks and sounds like the public figure, but presents from places that they commonly present from in public footage," said Cox. "The language and behavioral indicators (tics, sayings, etc.) they use could also be modeled, so the attacker could essentially 'become them' in a meeting."
While the current state of the tools might not be equipped to handle the real-time nature of meetings, Cox thinks it won't be long before they are. "Right now, AI tech can't do this," he said. "There is still latency and artifacts involved that give people that 'uncanny valley' feeling. But we are rapidly approaching parity in this area."
Text and still images: already a lost cause?
Meanwhile, when it comes to static media like text and still images, and their capacity to mislead users and other audiences, we've already been there. Two years ago, Futurism.com reported that Sports Illustrated and TheStreet were publishing articles by fake, AI-generated authors. The incident caused reputational damage to both media brands and ultimately resulted in the dismissal of two top executives at the brands' parent company.
While these weren't AI-assisted crimes committed by cybercriminals, they speak to the increasing frequency of purposeful deceit and the greater likelihood that many of the online identities we encounter could be inauthentic, supporting a range of highly questionable motives.
Also: Rolling out AI? 5 security tactics your business can't get wrong - and why
Once we've been successfully baited by deepfake imagery or identities, the question is: What's the damage? Will a voice clone trick you into sending money to a scammer? Will you get phished for the credentials to your most sensitive accounts? Will one of your devices end up with malware that eventually finds its way into your employer's systems? Will misconfigured defensive mechanisms enable malicious AI agents to roam freely behind your organization's firewall?
6 ways to defend yourself - starting now
Don't wait for an AI-enabled attack before taking action. Experts warn that it's time for individuals, IT professionals, and organizations alike to evolve their best practices and vigilance to reduce the likelihood of a catastrophic event at the hands of AI-equipped cybercriminals or ill-intentioned nation-states. Here, in no particular order, are six ways to improve your odds.
1. Stay up to date on evolving threats
Stay fanatically educated on AI safety and security, and get to know the risks and the evolving threat landscape. Pay close attention to the most important sources of information, such as the threat intelligence and AI safety groups at frontier AI developers Anthropic, Google DeepMind, and OpenAI. Set up your feeds to surface new information as it's published by reputable cybersecurity and threat intelligence sources, including GTIG, AppOmni, the US Cybersecurity and Infrastructure Security Agency (CISA), the OWASP LLM Top 10, and the AI Incident Database. Also watch the MITRE ATT&CK matrix for emerging AI-related techniques, such as adversary acquisition of AI capabilities, alongside the non-AI techniques.
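To make that monitoring routine rather than ad hoc, a short script can poll a handful of feeds on a schedule. Here's a minimal sketch, assuming the open-source feedparser library; the feed URLs are illustrative placeholders, so substitute the real RSS/Atom URLs each source publishes.

```python
# Minimal feed-polling sketch (pip install feedparser).
# The URLs below are assumptions -- swap in each source's published feed.
import feedparser

FEEDS = {
    "CISA advisories": "https://www.cisa.gov/cybersecurity-advisories/all.xml",  # assumed URL
    "GTIG blog": "https://feeds.example.com/gtig.xml",                           # placeholder
}

def latest_entries(limit: int = 5) -> None:
    """Print the newest items from each configured feed."""
    for name, url in FEEDS.items():
        feed = feedparser.parse(url)
        print(f"== {name} ==")
        for entry in feed.entries[:limit]:
            # Fields vary by feed; title and link are the common denominators.
            print(f"- {entry.get('title', '(untitled)')}: {entry.get('link', '')}")

if __name__ == "__main__":
    latest_entries()
```

Run it from a daily cron job (or a CI schedule) and you'll see new advisories within a day of publication rather than whenever you happen to check.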
2. Move to non-phishable credentials
Be aggressive about moving to non-phishable, passwordless credentials, including passkeys and number-matching-based multifactor authentication. The majority of successful attacks start with some form of phishing or vishing (the voice-based equivalent of phishing), and with the help of AI and voice cloning, those attacks will only become more convincing. The sooner you and your company make the move to non-phishable credentials, the better. Don't wait to opt in to non-phishable credentials on the online services that support them, and insist that your organization's identity and credential management eliminate phishable credentials sooner rather than later. Passwords are easily phished (or vished), and one-time passwords (OTPs) of the sort emailed, sent via SMS, or generated by an authenticator app are problematic as well.
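What makes a passkey non-phishable is that the credential is cryptographically bound to your domain, so a lookalike site has nothing to capture or replay. Here's a rough sketch of the server side of passkey enrollment, assuming the open-source py_webauthn library (version 2.x); the domain and username are illustrative, and exact function signatures vary by library version.

```python
# Server-side passkey (WebAuthn) registration sketch, assuming py_webauthn 2.x
# (pip install webauthn). rp_id, rp_name, and user_name are illustrative.
from webauthn import generate_registration_options, options_to_json

# Step 1: generate a registration challenge. The resulting credential is
# bound to rp_id (your domain), which is what defeats phishing: a lookalike
# domain can't harvest anything replayable.
options = generate_registration_options(
    rp_id="example.com",           # your site's domain (assumed)
    rp_name="Example Corp",        # human-readable service name
    user_name="alice@example.com", # the account being enrolled
)

# Step 2: hand the options to the browser, which calls
# navigator.credentials.create() and returns an attestation that the server
# then verifies (via webauthn.verify_registration_response, not shown).
print(options_to_json(options))
```

Unlike a password or an OTP, there is no shared secret in this flow that a phishing page, however convincing, can trick you into typing.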
3. Identify all your agents
Before making the move to agentic AI, ensure you have a way to identify every legitimate agent within your control or your organization's infrastructure. Vendors like Microsoft, Okta, and Ping Identity offer identity and access management (IAM) solutions that manage the identities of agents on your network, much as you manage human identities. Although agentic AI is likely to yield enormous productivity gains, legitimate agents are a target-rich environment for malicious agents (and there's no question that threat actors will rely on such "shadow agents" to do their bidding). Should one of your legitimate agents become compromised, time will be of the essence for tracking it down and deprovisioning it. But if those agents are roaming free without a shred of management, good luck containing the damage from an agentic attack.
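The core requirement is an inventory: every agent has a registered identity, an accountable owner, least-privilege scopes, and a kill switch. The toy sketch below illustrates that shape in plain Python; a real deployment would lean on an IAM product from vendors like those above, and every name here is hypothetical.

```python
# Toy agent-inventory sketch: registered identities with owners, scopes,
# and a kill switch. All identifiers are hypothetical; real deployments
# would use an IAM platform rather than an in-memory dict.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str        # unique identifier the agent presents
    owner: str           # human or team accountable for it
    scopes: list[str]    # least-privilege permissions it holds
    active: bool = True
    registered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

registry: dict[str, AgentIdentity] = {}

def register(agent: AgentIdentity) -> None:
    registry[agent.agent_id] = agent

def deprovision(agent_id: str) -> None:
    """Kill switch: disable a compromised agent the moment it's identified."""
    agent = registry.get(agent_id)
    if agent:
        agent.active = False

def is_authorized(agent_id: str, scope: str) -> bool:
    """Unknown or deactivated agents are denied by default."""
    agent = registry.get(agent_id)
    return bool(agent and agent.active and scope in agent.scopes)

# Usage: enroll a reporting agent, then cut it off after a compromise.
register(AgentIdentity("report-bot-01", owner="finops", scopes=["read:ledger"]))
assert is_authorized("report-bot-01", "read:ledger")
deprovision("report-bot-01")
assert not is_authorized("report-bot-01", "read:ledger")
```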
4. Embrace zero trust
Employ a zero-trust strategy wherever possible. Yes, certain people, organizations (e.g., partners), processes, and even AI agents will need access to various resources and systems of record in order to execute their responsibilities. But always start them out with few or even no privileges and see what breaks. It's a jungle out there; danger lurks under every rock and behind every tree, and a little bit of friction can help. Minimally escalate privileges where that friction presents serious obstacles to the business. Trust should be earned, not granted by default.
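In code, "deny by default" is a one-line discipline: no grant, no access. The sketch below, with hypothetical identities and resource names, shows the start-from-zero posture described above.

```python
# Deny-by-default sketch: grants start empty, everything else is refused.
# Identity and action names are hypothetical.
GRANTS: dict[str, set[str]] = {}

def require_grant(identity: str, action: str) -> None:
    """Zero privileges by default; raise unless an explicit grant exists."""
    if action not in GRANTS.get(identity, set()):
        raise PermissionError(f"{identity} is not granted {action}")

def read_invoices(identity: str) -> str:
    require_grant(identity, "invoices:read")
    return "invoice data"

# First run with no grants to see what breaks...
try:
    read_invoices("partner-api")
except PermissionError as exc:
    print(exc)  # partner-api is not granted invoices:read

# ...then escalate minimally, adding only the narrowest grant required.
GRANTS["partner-api"] = {"invoices:read"}
print(read_invoices("partner-api"))
```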
5. Know your OAuth tokens
Get smarter about your OAuth token exposure. You may not know it, but you've likely issued one or more OAuth tokens that allow one service to access another on your behalf. For example, if your Spotify account is set up to automatically post to Instagram about the songs you're listening to, you've essentially instructed Instagram to grant Spotify an OAuth token. Such delegations of authority are expected to multiply by several orders of magnitude once agentic AI takes off, since all those agents will need access to a wide range of services. But here's the question: Do you know all the OAuth tokens you've issued? And for those you do know about, do you know how to revoke them? For a long time, we had the luxury of not caring much about this exposure. Those days are officially over: OAuth tokens are among the most coveted credentials a threat actor can acquire.
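Most providers expose a settings page listing the grants you've issued, and many also support programmatic revocation via the OAuth 2.0 token revocation standard (RFC 7009). Here's a minimal sketch using the requests library; the endpoint, client credentials, and token values are placeholders for whatever your provider actually publishes.

```python
# Token revocation per RFC 7009 (pip install requests).
# The endpoint and credentials below are placeholders, not a real provider's.
import requests

REVOCATION_ENDPOINT = "https://auth.example.com/oauth/revoke"  # assumed

def revoke_token(token: str, client_id: str, client_secret: str) -> bool:
    """Ask the authorization server to invalidate a token we no longer need."""
    resp = requests.post(
        REVOCATION_ENDPOINT,
        data={"token": token, "token_type_hint": "access_token"},
        auth=(client_id, client_secret),  # client authentication per the spec
        timeout=10,
    )
    # RFC 7009 servers return HTTP 200 when the token has been revoked
    # (or was already invalid).
    return resp.status_code == 200

if __name__ == "__main__":
    print(revoke_token("stale-token-value", "my-client-id", "my-client-secret"))
```

A periodic audit of those grant pages, revoking anything you no longer recognize or use, goes a long way toward shrinking this attack surface.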
6. Stay skeptical
Become a skeptic if you aren't one already. As it becomes increasingly difficult to tell the difference between legitimate content and deepfakes, now is the time to become less trusting of everything you see or hear online. Just last week, within hours of former US Secretary of State Hillary Clinton testifying that she never met Jeffrey Epstein, deepfake photos portraying her in his presence went viral across social media in an attempt to discredit her testimony. As tools for producing deepfake imagery and audio continue to evolve, threat actors will rely on them in unanticipated ways. For example, in a highly targeted attack, you might receive a video or voice message from your boss or CEO asking you to take an action you otherwise wouldn't. Err on the side of caution and always double-check the authenticity of such messages.
Know your enemy
When considering these and other ways to level up your "opsec" -- your operational security -- put yourself in the shoes of your adversaries, given what AI can do now and in the not-too-distant future. If you were your adversary, you would likely exhaust every AI option available to achieve your objective. And the better AI gets at helping achieve that malicious objective, the more defenseless your victims will seem; it's like bringing a gun to a knife fight. If you know that in advance -- which you now do -- would you leave the outcome to chance, or would you rise to the challenge? Do you even have a choice?
Also: Why encrypted backups may fail in an AI-driven ransomware era
Oh, and as for those mysterious callers who wait for you to say "yes" or "hello" and then hang up: perhaps you should consider not answering calls from unknown numbers (or, at a minimum, waiting for the caller to speak first). It's a bitter pill to swallow, and admittedly impractical in certain scenarios. But then again, keep the idea of zero trust in mind. If the call is that important (and legitimate), they'll figure out how to get in touch.