Google: Hackers used AI to develop zero-day exploit for web admin tool

52 minutes ago 4

 Hackers used AI to develop zero-day exploit for web admin tool

Researchers at Google Threat Intelligence Group (GTIG) say that a zero-day exploit targeting a popular open-source web administration tool was likely generated using AI.

The exploit could be leveraged to bypass the two-factor authentication (2FA) protection in a popular open-source, web-based system administration tool that remains unnamed.

Although the attack was foiled before the mass exploitation phase, the incident shows that threat actors are relying more on AI assistance for their vulnerability discovery and exploitation efforts.

Based on the structure and content of the Python exploit code, Google has high confidence that the adversary used an AI model to find and weaponize the vulnerability.

"For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data," GTIG says in a report today.

The large language model (LLM) used for the malicious task remains unclear, but Google rules out the possibility that Gemini was involved in the process.

Additional evidence suggesting the use of LLM tools in the discovery process is the nature of the flaw - a high-level semantic logic bug that AI systems excel at identifying, rather than memory corruption or input sanitization issues typically uncovered through fuzzing or static analysis.

ComparisonSource: Google

Google notified the software developer about the significant threat and timely action to disrupt the attack.

“For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI,” GTIG researchers say.

Apart from this case, Google notes that Chinese and North Korean hackers, such as APT27, APT45, UNC2814, UNC5673, and UNC6201, have been using AI models for vulnerability discovery and exploit development, continuing the trend observed in the February report.

Russia-linked actors were also observed using AI-generated decoy code to obfuscate malware such as CANFAIL and LONGSTREAM.

CANFAIL code comments for the decoy logicCANFAIL code comments for the decoy logic
Source: Google

Google has also highlighted a Russian operation codenamed “Overload,” where social engineering threat actors used AI voice cloning to impersonate real journalists in fake videos promoting the anti-Ukraine narrative.

The PromptSpy backdoor for Android, documented by ESET earlier this year, is also highlighted in Google’s report for its integration with Gemini APIs for autonomous device interaction.

However, Google also found an autonomous agent module named "GeminiAutomationAgent" that uses a hardcoded prompt to enable the malware to interact with the device in an automated way.

According to the researchers, the role of the prompt is to assign a benign persona so it can bypass the LLM's safety features. The goal is to calculate the geometry of the user interface bounds, which PromptSpy could use to interact with the device in multiple ways.

Furthermore, the malware makes use of AI-based capabilities to replay authentication on the device, be it in the form of a lock pattern or a PIN, Google researchers say.

The company is warning that threat actors are now industrializing access to premium AI models using automated account creation, proxy relays, and account-pooling infrastructure.

article image

99% of What Mythos Found Is Still Unpatched.

AI chained four zero-days into one exploit that bypassed both renderer and OS sandboxes. A wave of new exploits is coming.

At the Autonomous Validation Summit (May 12 & 14), see how autonomous, context-rich validation finds what's exploitable, proves controls hold, and closes the remediation loop.

Claim Your Spot

Read Entire Article