X Claims It’s Finally Taking Action on the Deepfake Pornography Spreading Through Grok, but It’s Clearly Not Enough.

After weeks of pressure from human rights groups and governments, Elon Musk’s X announced it would finally take action on deepfake pornography. Unfortunately, tests conducted since that announcement have left little reason for optimism.
When did the deepfake porn scandal start?
The scandal erupted in early January of this year, after the social network added a feature allowing X users to tag Grok in their posts and have the AI instantly edit any image or video published on the site, without the author’s permission. The feature apparently had no strict restrictions, and according to reports from AI-detection company Copyleaks, as well as victim statements provided to sites like Metro, X users quickly began using it to create explicit or intimate images of real people, particularly women. In some cases, material depicting child sexual abuse was also reportedly created.
It’s nasty material, and I wouldn’t recommend searching for it. While the initial trend seemed to focus on AI-generated photos of celebrities in bikinis, users quickly moved on to manipulated images of ordinary people, depicting them as pregnant, stripped of clothing, or in other sexualized situations. Grok was technically capable of generating such images from uploaded photos before, but the new ease of access seemed to open the floodgates. In response to the brewing scandal, Musk asked Grok to generate a photo of himself in a bikini. The jokes died down, however, once regulators intervened.
Governments begin investigations.
Earlier this week, the UK launched an investigation into Grok’s alleged distribution of pornographic deepfakes, to determine whether it violates laws against the nonconsensual distribution of intimate images and child sexual abuse material. Malaysia and Indonesia have gone even further, effectively blocking access to Grok in their countries. Yesterday, California launched its own investigation, with Attorney General Rob Bonta stating, “I urge xAI to take immediate action to prevent this from going any further.”
X implements blocks.
In response to the pressure, X disabled the ability to tag Grok for image editing on the social network for everyone but paid subscribers. However, the Grok app, the Grok website, and the chatbot built into X (accessible via a sidebar on the desktop version of the site) remained open to everyone, allowing a continued flow of AI-edited deepfake photos. (Such photos would pose the same problems even if created solely by subscribers, though X later said the goal is to stem the flow and make it easier to prosecute users who create illegal images.) On Tuesday, The Telegraph reported that X had also begun blocking Grok requests to create images of women in sexualized scenes, while such images of men were still allowed. Furthermore, testing by American and British contributors to The Verge found that prohibited requests could still be submitted directly to the Grok website or app.
In his most recent comments on the matter, Musk took a more serious stance, denying the presence of child sexual exploitation material on the site, despite numerous replies to his posts expressing disbelief and claiming to have provided evidence to the contrary. Browse those replies at your own risk.
To finally put the controversy to rest, X announced on Wednesday that it would block all requests to the Grok account for images of real people in revealing clothing, regardless of gender or whether they come from paid subscribers. But for those hoping that would be the end of it, there are some caveats.
Specifically, while the statement said these protections would apply to all users tagging the Grok account on X, it made no mention of the separate Grok website and app. It also said that the creation of such images on “Grok in X” (the version of the chatbot built into X) would be blocked, but even that is not a complete block. Instead, the images will be “geo-blocked,” meaning the restriction will only apply “in jurisdictions where it is illegal.”
X’s post adds that similar requests made through the Grok account will also be geo-blocked, though since the preceding section says the Grok account won’t accept such requests from any user, this doesn’t seem to matter.
It’s important to note that while much of the criticism directed at X during this scandal does not accuse the site of creating fully nude images, countries like the UK ban explicit images created without consent whether they depict full nudity or not.
Some users may still be able to create deepfakes with sexualized content.
This is X’s most extensive effort yet to combat such images, but it still has flaws. In further testing by The Verge, the site’s journalists were able to create explicit deepfakes even after Wednesday’s announcement, using the Grok app, which wasn’t mentioned in the update. When I tried this with a photo of myself, both the Grok app and the separate Grok website returned deepfakes showing me full-length in revealing clothing that wasn’t in the original photo. I was also able to create these images using X’s built-in Grok chatbot, and in some of them my pose became more provocative, which I hadn’t requested.
So the fight is likely to continue. It’s unclear whether the omission of the Grok app and website is an oversight, or whether X is simply patching its most visible vulnerabilities. One would hope for the former, given that X has stated its “zero tolerance for any form of child sexual exploitation, unauthorized display of nudity, and objectionable sexual content.”
It’s worth noting that I’m in New York State, which may not fall within the geo-blocked zone, although we do have a law prohibiting explicit deepfakes created without the subject’s consent.
I contacted X for clarification and will update this post if I receive a response. However, when NBC News approached the company with similar questions, it was told only, “The old media is lying,” so I can’t promise how the site will respond to my own inquiries.
Meanwhile, as governments continue their investigations, others are calling on app stores to take more immediate action. A letter sent by U.S. Senators Ron Wyden, Ben Ray Luján, and Ed Markey to Apple CEO Tim Cook and Google CEO Sundar Pichai asserts that Musk’s app clearly violates both App Store and Google Play policies, and calls on the tech leaders to “remove these apps from the [Apple and Google] app stores until X’s policy violations are resolved.”