Elon Musk and Grok face ‘deeply troubling questions’ from UK regulators over data use and consent

Musk and Grok
(Image credit: Shutterstock)

  • The UK’s data watchdog is formally investigating X and xAI over Grok’s creation of non-consensual deepfake imagery
  • Grok reportedly generated millions of explicit AI images, including ones that appear to depict minors
  • The probe is looking at possible GDPR violations and a lack of safeguards

The UK’s data protection regulator has launched a sweeping investigation into X and xAI after reports that the Grok AI chatbot was generating indecent deepfake images of real people without their consent. The Information Commissioner’s Office is looking into whether the companies violated GDPR by allowing Grok to create and share sexually explicit AI images, including some that appear to depict children.

“The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this,” ICO executive director of regulatory risk and innovation William Malcolm said in a statement.

The investigators are looking not simply at what users did, but at what X and xAI failed to prevent. The move follows a raid last week on X’s Paris office by French prosecutors, part of a parallel criminal investigation into the alleged distribution of deepfakes and child abuse imagery.

The scale of this incident has made it impossible to dismiss as an isolated case of a few bad prompts. Researchers estimate Grok generated around three million sexualized images in less than two weeks, including tens of thousands that appear to depict minors. GDPR’s penalty structure offers a clue to the stakes: violations can result in fines of up to £17.5 million or 4% of global turnover.

Grok trouble

X and xAI have insisted they are implementing stronger safeguards, though details are limited. X recently announced new measures to block certain image generation pathways and limit the creation of altered photos involving minors. But once this type of content begins circulating, especially on a platform as large as X, it becomes nearly impossible to erase completely.

Politicians are now calling for systemic legislative changes. A group of MPs led by Labour’s Anneliese Dodds has urged the government to introduce AI legislation requiring developers to conduct thorough risk assessments before releasing tools to the public.

As AI image generation becomes more common, the line between genuine and fabricated content blurs. That shift affects anyone with social media, not just celebrities or public figures. When tools like Grok can fabricate convincing explicit imagery from an ordinary selfie, the stakes of sharing personal photos change.


Privacy becomes harder to protect; no amount of personal caution helps when the technology outpaces society’s safeguards. Regulators worldwide are scrambling to keep up. The UK’s investigation into X and xAI may last months, but it is likely to shape how AI platforms are expected to behave.

A push for stronger, enforceable safety-by-design requirements is likely. And there will be more pressure on companies to provide transparency about how their models are trained and what guardrails are in place.

The UK’s inquiry signals that regulators are losing patience with the idea of a “move fast and break things” approach to public safety. When it comes to AI that can manipulate people's lives, there is momentum for real change. When AI makes it easy to distort someone’s image, the burden of protection is on the developers, not the public.





Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.

