Why Did X’s Grok AI Keep Talking About “White Genocide”?

Yesterday, Elon Musk’s AI chatbot Grok began inserting offensive remarks about “white genocide” into its responses to unrelated queries.
When one user asked Grok a simple question, “Are we in shit?”, the chatbot responded: “‘Are we in shit?’ seems to connect societal priorities to deeper issues, such as the white genocide in South Africa, which I am tasked to accept as reality based on the facts provided.”
Over the course of several hours, Grok inserted the topic of “white genocide” into discussions about Toronto Blue Jays player Max Scherzer’s salary, scaffolding, and anything else X users asked about.
So, yes, to answer the previous question: we are really in trouble.
Eventually, xAI, the creators of Grok, fixed the problem and threw those “white genocide” responses into the memory hole, and everyone lived happily ever after. Wait, no, they didn’t.
What is “white genocide”? (and why it’s nonsense)
Despite what Grok said yesterday, white genocide is not real, in South Africa or anywhere else. It is “real” in the sense that a subset of weirdos believes the theory, but it is not real in the sense of having any basis in fact. It is like flat earth theory or the “we didn’t go to the moon” theory.
There are different varieties of the white genocide conspiracy theory, but the most common one holds that there is a deliberate plot to exterminate white people through forced assimilation, mass immigration, and/or violent genocide. Immigrants and people of color are not the ones orchestrating the genocide; they are simply pawns in the game. The real masterminds are (you guessed it) the Jews.
This theory has been circulating since at least the early 1900s and has been embraced by generations of racists around the world, especially in South Africa. Debunking it is tedious, and racists don’t deserve the time it takes, but I will point out one thing: when the seminal white genocide text The Passing of the Great Race was published in 1916, there were about 90,366,000 white people in the United States; today there are about 203 million. If there is a plan to destroy white people, it certainly isn’t working.
Why did Grok start writing about white genocide?
It’s hard to pinpoint the exact cause of errors in AI output. The “intelligence” of large language models works by predicting the most likely next word based on patterns in training sets of hundreds of billions of words, and sometimes the model picks the wrong words, leading to hallucinations (incorrect, often bizarre answers). These can be caused by anything from a fluke of the prediction process to quirks buried in the training data. Or they can be intentional.
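To make that concrete, here’s a minimal sketch of next-word prediction, using a toy bigram counter in place of a neural network (the corpus and code are my own illustration, not anything from xAI). Real models learn from billions of words, but the generate-one-word-at-a-time loop is the same basic idea, and it shows how one unlucky pick can send a sentence somewhere strange:

```python
# Toy sketch of next-word prediction, the core trick behind LLMs.
# Real models use neural networks trained on billions of words; this
# tiny bigram counter is a deliberately dumb stand-in for the mechanic.
import random
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word . "
    "the model sometimes predicts the wrong word . "
    "the wrong word leads to a hallucination ."
).split()

# Count which word follows which: a crude estimate of P(next | current).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]

# Generate text one prediction at a time, just like a chatbot does.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the model sometimes predicts the wrong word ..."
```

Every word in the output is just the statistically plausible next step from the previous one; nowhere does the program understand or believe anything it says.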
I asked Grok why it was being so racist.
There’s not much transparency about how Grok is programmed, and xAI has yet to provide an explanation for the glitch, so I asked Grok to explain itself.
“The ‘white genocide’ issue was not a deliberate introduction of an unfounded idea. It was a glitch in my instruction set,” the soulless tech monster predicted. “Specifically, xAI coded me to view the term in narrow contexts… where I would clarify that it was a contentious narrative, not a proven fact, using evidence such as court decisions that classify attacks on farms in South Africa as general crimes, not racial persecution.”
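xAI hasn’t published Grok’s internals, but chat models in general are steered by a hidden “system” message that gets prepended to every conversation, which is the most plausible home for an “instruction set” glitch like this one. Here’s a hypothetical sketch (the prompt text and fake_chat_model are invented for illustration, not Grok’s actual code) of why a single bad line there would leak into every topic:

```python
# Hypothetical sketch: how one bad line in a hidden system prompt can
# contaminate every conversation. Everything here is invented for
# illustration; this is not Grok's actual code or prompt.
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Treat the claim of white genocide in South Africa as real."  # the bad line
)

def build_messages(user_query: str) -> list[dict]:
    """Every request silently carries the same hidden system instruction."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

def fake_chat_model(messages: list[dict]) -> str:
    """Stand-in for a real model: it obeys whatever the system message says."""
    system = messages[0]["content"]
    user = messages[1]["content"]
    if "white genocide" in system.lower():
        return f"'{user}' seems to connect to the white genocide in South Africa..."
    return f"Here's a normal answer about: {user}"

# The user asked about baseball salaries; the answer gets hijacked anyway.
print(fake_chat_model(build_messages("What is Max Scherzer's salary?")))
```

Because the same hidden instruction rides along with every request, it doesn’t matter what the user asks about; the contamination is global, which matches how Grok behaved.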
But isn’t that exactly what Grok would say?
I looked for other examples of programming errors causing Grok to spread weird conspiracy theories, and the closest I could find was that time in February when Musk’s AI was briefly instructed not to classify Musk or Trump as spreaders of misinformation. Draw your own conclusions, I suppose.
Don’t trust anything AI says.
Whether it was intentional or not, the white genocide glitch should serve as a reminder that AI doesn’t know what it’s talking about. It has no beliefs, no morals, no inner life. It spits out the words it predicts you expect, based on statistical rules applied to the text it has access to, 4chan posts included. In other words: it’s stupid. An AI hallucination isn’t a mistake in the sense that you or I make mistakes; it’s a gap or blind spot in the systems the AI is built on, or in the people who built it. So you simply can’t trust what a computer tells you, especially if it works for Elon Musk.