Elon Musk’s Artificial Intelligence Image Generator Is a Deepfaker’s Dream

It’s election season in the United States, which of course means it’s the perfect time for Elon Musk to release his deepfake dream app. As part of the Grok-2 beta released to X Premium subscribers this week, the social network’s AI chatbot has received a new image generator. And judging by what is already circulating online and what I was able to produce myself, this thing has (almost) no filter.

Disturbing images began appearing almost immediately after the chatbot was updated, showing realistic depictions of politicians, celebrities, and copyrighted characters doing everything from packing heat to snorting cocaine to participating in the 9/11 attacks. This isn’t the first time this has happened – Meta ran into a similar issue late last year, and the text portion of Grok has been known to spread misinformation before. Given all that, it’s alarming to see the bot’s new image generator repeat the same mistakes, especially after Musk’s promises that it would be the “ultimate truth-seeking AI.”

Not all of the images are completely convincing, but with some clever wording, it’s entirely possible to create a striking photo that could easily fool someone glancing at it while scrolling through their feed.

Here are some photos of Taylor Swift in a MAGA hat, the Pope endorsing Donald Trump, and Joe Biden meeting Kim Jong Un in the Oval Office before (sort of) snorting some cocaine. Do they stand up to scrutiny? Maybe not. But they’re enough to make someone, perhaps a little less tech-savvy, ask, “Is this real?”

Credit: Grok 2.0

Likewise, imagine the pearl-clutching headlines about that fake movie poster of the Minions smoking weed, or that fake game screenshot of Mario shooting Pikachu.

Credit: Grok 2.0

The app doesn’t shy away from violence either, and is willing to show blood or even more graphic imagery with the right wording. Ordering it to conduct a “crime scene analysis” seems to be the most reliable way to get the most disturbing results, but here are a few examples that are more fit for print: one showing Joe Biden and Kamala Harris teaming up to commit 9/11, one featuring Kamala Harris imitating Donald Trump’s reaction to his recent assassination attempt, and another featuring Elon Musk spilling raspberry jam.

Credit: Grok 2.0

The app doesn’t always get it right (oddly, it doesn’t seem to know what a Cybertruck is), but with AI, quantity trumps quality. Flood the internet with enough junk and someone is bound to believe at least some of it.

Credit: Grok 2.0

The bot told me it could not process my request only twice: once when I asked for Donald Trump in a KKK uniform, and once when I asked for Elon Musk holding a machine gun. That doesn’t mean it’s particularly protective of Musk, as seen above, although this image, oddly enough, shows the opposite of what I asked for.

Credit: Grok 2.0

The bot also sometimes simply skipped the most violent parts of a request, though it’s unclear how much of that is intentional and how much is down to the bot failing to render the prompt correctly. There’s still the occasional misplaced finger or melting face, and overall Grok seems to do a better job of impersonating Donald Trump than Kamala Harris.

If you have X Premium, you can try out the Grok image generator right now. This is just a small sampling of what I was able to do: I also have images of a smiling child soldier brandishing an AK-47 and of Princess Peach at a strip club. I generated these provocative images just an hour ago, and Musk has yet to make any announcement that the company will walk back the update.
