AI Scammers Are Creating More Believable Schemes in 2025. Here's How to Stay Safe


Artificial intelligence helps us get information faster, but it's also making it easier for cybercriminals to get a hold of your personal data. If you've been targeted by any type of cybercrime, scam or fraud attempt in the last 12 months, chances are AI played a role in it. 

AI-generated content contributed to more than $12 billion in fraud losses in 2023, according to a Deloitte digital fraud study. That number could triple to more than $40 billion in the US by 2027.


Artificial intelligence has already been woven into our everyday lives. Need a recipe? Ask ChatGPT. Run a Google search? You were probably served an AI-generated summary in response to your question. Even financial apps are starting to leverage AI to help you stick to your budget or find ways to save.

With some of the most advanced AI technologies in the world now in the hands of cybercriminals, we're witnessing a completely new world of digital fraud, one we all need to be better prepared for.



AI helps hackers push their limits

Cybercriminals have only so many hours in their day, and only so many people they can hire to help carry out their schemes. AI helps to solve both of those problems.

Now, a handful of coded instructions is often all it takes to create a global phishing attack that can be translated into multiple languages and stripped of many telltale giveaways that the message you're reading is a scam. AI can fix bad grammar, correct spelling mistakes and rewrite awkward greetings to make phishing messages seem more legitimate.

AI can also help cybercriminals better orchestrate phishing attacks on a specific industry or company, or around a specific event, like a conference, a trade show, or a national holiday.

Researchers at the University of Illinois Urbana-Champaign recently used voice-enabled AI bots to pull off some of the most common scams reported to the federal government before safely returning the money to the victims.

In some of the scams, the bots not only had a success rate of more than 60% but also completed the scam in a matter of seconds.

How fraudsters use AI to steal from you

AI helps criminals quickly sift through troves of data (think billions upon billions of personal records) stolen in data breaches or purchased on the dark web, a task that previously took far more time and effort.

Scammers can now use AI to spot exploitable patterns and valuable information in large data sets, as well as to help orchestrate attacks. AI is also bolstering other forms of fraud.

Synthetic identity fraud

Synthetic identity theft involves stealing a Social Security number -- usually from a child, the elderly or the homeless -- and combining it with other stolen or falsified information such as names and birthdates to create a new, false identity.

Hackers then use this falsified identity to apply for credit, leaving the SSN's original owner with the bill.

AI helps facilitate this popular form of fraud by making it much easier to create highly realistic forged identity documents and synthetic imagery that mimics real faces and can bypass biometric verification systems like those found on an iPhone.

Deepfake scams 

An AI-assisted deepfake scam occurred every five minutes in 2024, according to an estimate from security firm Entrust.

There are countless stories of fraudsters using AI to successfully scam businesses and everyday people out of millions of dollars. Bad actors use highly realistic but completely fake videos and voices of people the victims know, convincing enough to fool even the most cautious among us.

Less than a year ago, an employee at Arup, a British design and engineering company, was tricked into transferring $25 million to scammers who used a deepfake video call impersonating the company's CFO.

AI isn't just cloning voices and faces though -- it's also capable of copying human personalities, according to a recent study from researchers at Stanford University and Google DeepMind.

With little information about their subjects, the artificial personalities were able to mimic political beliefs, personality traits and likely responses to questions in an effort to fool victims, the study found.

These results -- coupled with advancements in deepfake video and voice cloning already in use by cybercriminals -- could make it even more difficult to tell if the person you're speaking to online or on the phone is real or an AI doppelganger.

AI could copy your important documents

Despite the world's reliance on technology, physical documentation is still predominantly used to verify your identity.

AI has become skilled at creating believable versions of passports, driver's licenses, birth certificates and more, leaving businesses and governments scrambling to find better ways to confirm identities in the future.

How to protect yourself against AI-assisted scams

Following the same steps you'd take to protect yourself from human-run scams can also help protect you from AI-assisted ones. That means staying observant, protecting your bank accounts with multiple layers of security, embracing multifactor authentication, freezing and monitoring your credit reports and enrolling in identity theft protection.

Here are other tips to stay safe:

  • As AI becomes more potent and widely used, it's even more important to critically examine what we see.
  • Always verify unexpected correspondence directly with the company it claims to come from to confirm its authenticity.
  • Make sure anything you see online is real before sharing it, so you don't unwittingly spread misinformation.
  • As an extra layer of defense against phishing attacks, use a hardware security key, like a YubiKey or Google Titan. These keys cost as little as $30.
  • If you're not already using a password manager, it may be time. 1Password, Dashlane and LastPass are among the most popular and will help you create unique passwords for every online account.
  • Watch for deepfake clues. Fake voices can sound "flatter" or monotone, lacking the emotion of a typical human conversation. With deepfake videos, watch for unusual eye, mouth or lip movements, facial distortions and pixelation.

AI-assisted scams will only become more convincing as the technology advances. Staying aware of common scam tactics and using your common sense and caution remain your best defense against these attempts.



