Elon Musk’s X Says It Will (Sort of) Crack Down on Grok’s Sexual Deepfake Problem


Elon Musk’s social media platform X is taking additional steps to curb its sexual deepfake problem, following weeks of backlash and multiple government investigations around the world. But the changes don’t resolve the issue outright, adding new layers of limited restrictions rather than a platform-wide fix.

In a pretty confusing post on Wednesday evening, X’s @Safety account outlined several updates to how its AI image generation and editing features work, with different rules depending on whether users are generating or editing images by tagging the @Grok account or going straight to the Grok tab on X.

First, the company said it has implemented new technical measures to prevent users from specifically using the @Grok account to alter “images of real people in revealing clothing such as bikinis.” X says the restriction applies to all users, including those on a premium plan.

X also reiterated that image generation and image editing through the @Grok account are now limited to paid subscribers.

“This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable,” the company said in the post.

X previously announced plans to restrict using @Grok to edit images to paid users, a move that drew criticism from U.K. government officials. A spokesperson for Downing Street said at the time that the change “simply turns an AI feature that allows the creation of unlawful images into a premium service.”

However, as The Verge first pointed out, Grok’s image generation tools remain available for free when users access the chatbot through the standalone Grok website and app, as well as through Grok tabs on the X app and website. Using a free account, Gizmodo was also able to access Grok’s image generation feature through the Grok tab on both the X website and mobile app. On Thursday, the standalone site still had no trouble generating an image of Elon Musk wearing a bikini when asked, and it was willing to take the bikini off, too.

The biggest update is that X claims it will now block “the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.” This specific update seems to apply to both the @Grok account and the Grok tabs on X. 

It also arrives as lawmakers in the U.K. are working to make such images illegal.

“We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content,” the company said.

X and its parent company, xAI, did not immediately respond to a request for comment from Gizmodo.

The overall changes arrive after weeks of intense backlash over the recent proliferation of sexual deepfakes on the platform, and relative silence from the company itself. Since late last month, some X users have used Grok to generate sexualized images from photos posted by other users without their consent, including images involving minors.

One social media and deepfake researcher found that Grok generated roughly 6,700 sexually suggestive or nudifying images per hour over 24 hours in early January, Bloomberg reported.

Governments around the world have been quick to respond. Malaysia and Indonesia blocked access to Grok, while regulators in the U.K. and European Union opened investigations into potential violations of online safety laws.

The U.K.’s online regulator, Ofcom, said it would continue its investigation despite the newly announced changes.

In the U.S., California Attorney General Rob Bonta announced Wednesday that his office had launched its own investigation into the issue.

Meanwhile, as scrutiny of Grok has intensified, X quietly updated its terms of service to require that all pending and future legal cases involving the company be filed in the Fort Worth division of the Northern District of Texas, where one of the court’s three judges is widely seen as friendly to the company.

Left-leaning watchdog Media Matters, a frequent critic of Musk’s X, said it would leave the platform in response to the updated terms.
