Google has published a list of ways AI is currently being used by threat actors to more efficiently hack you

Fallout hacking minigame (Image credit: Bethesda)

As AI continues to grow and make its way into everyday life, the alleged productivity gains do appear to be showing in some places. It just so happens that hacker groups are one of those places, and Google's Threat Intelligence Group has listed some of the many ways they use it. Welcome to the future.

In its latest report, it says, "In the final quarter of 2025, Google Threat Intelligence Group (GTIG) observed threat actors increasingly integrating artificial intelligence (AI) to accelerate the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development."

"Our latest GTIG AI Threat Tracker report reveals how adversaries are integrating AI into operations. We detail state-sponsored LLM phishing, AI-enabled malware like HONESTCUE, and rising model extraction attacks. Read the report: https://t.co/6GIqxYxNDF" — Google Threat Intelligence, February 12, 2026

One such use of AI is making hackers seem more reputable in conversation. "Increasingly, threat actors now leverage LLMs to generate hyper-personalized, culturally nuanced lures that can mirror the professional tone of a target organization or local language."

Google has spotted it being used in phishing scams to learn information about potential targets, too. "This activity underscores a shift toward AI-augmented phishing enablement, where the speed and accuracy of LLMs can bypass the manual labor traditionally required for victim profiling."


This is all before mentioning AI-generated code, with hackers such as APT31 using Gemini to automate vulnerability analysis and draft plans to test those vulnerabilities. Google also spotted 'COINBAIT', a phishing kit masquerading as a cryptocurrency exchange, "whose construction was likely accelerated by AI code generation tools."

Though mostly a proof of concept, Google has also reportedly spotted malware that prompts a victim's AI tools to write code generating additional malware. That would make tracking down malware on a machine increasingly hard as it continues to 'mutate'.

Google says, "The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly."

Just last week, we saw a phishing scam that uses AI to deepfake company CEOs in order to gain access to victims' cryptocurrency. It seems AI is becoming more than just one tool in a hacker's toolbelt, and one has to hope those defending against it are gathering enough data to counter it.


James is a more recent PC gaming convert, often admiring graphics cards, cases, and motherboards from afar. It was not until 2019, after just finishing a degree in law and media, that they decided to throw out the last few years of education, build their PC, and start writing about gaming instead. In that time, he has covered the latest doodads, contraptions, and gismos, and loved every second of it. Hey, it’s better than writing case briefs.

