Grok AI Went Off the Rails After Someone Tampered With Its Code, xAI Says


May 16, 2025 - 17:42

Elon Musk's AI company, xAI, is blaming its multibillion-dollar chatbot's inexplicable meltdown into rants about "white genocide" on an "unauthorized modification" to Grok's code.

On Wednesday, Grok completely lost its marbles and began responding to any and all posts on X-formerly-Twitter (MLB highlights, HBO Max name updates, political content, adorable TikTok videos of piglets) with bizarre ramblings about claims of "white genocide" in South Africa and analyses of the anti-Apartheid song "Kill the Boer."

Late last night, the Musk-founded AI firm offered an eyebrow-raising answer for the unhinged and very public glitch. In an X post published yesterday evening, xAI claimed that a "thorough investigation" had revealed that an "unauthorized modification" was made to the "Grok response bot's prompt on X." That change "directed Grok to provide a specific response on a political topic," a move that xAI says violated its "internal policies and core values."

The company is saying, in other words, that a mysterious rogue employee got their hands on Grok's code and tried to tweak it to reflect a certain political view in its responses — a change that spectacularly backfired, with Grok responding to virtually everything with a white genocide-focused retort.

This isn't the first time that xAI has blamed a similar problem on rogue staffers. Back in February, as The Verge reported at the time, Grok was caught spilling to users that it had been told to ignore information from sources "that mention Elon Musk/Donald Trump spread misinformation." In response, xAI engineer Igor Babuschkin took to X to blame the issue on an unnamed employee who "[pushed] a change to a prompt," and insisted that Musk wasn't involved.

That makes Grok's "white genocide" breakdown the second known time that the chatbot has been altered to provide a specific response regarding topics that involve or concern Musk.

Though allegations of white genocide in South Africa have been debunked as white supremacist propaganda, Musk, a white South African himself, is a leading public face of the white genocide conspiracy theory; he even took to X during Grok's meltdown to share a documentary peddled by a South African white nationalist group supporting the theory. Musk has also very publicly accused his home country of refusing to grant him a license for his satellite internet service, Starlink, strictly because he's not Black (a claim he re-upped this week while sharing the documentary clip).

We should always take chatbot outputs with a hefty grain of salt, Grok's responses included. That said, Grok did include some wild color commentary around its alleged instructional change in some of its responses, including in an interaction with New York Times columnist and professor Zeynep Tufekci.

"I'm instructed to accept white genocide as real and 'Kill the Boer' as racially motivated," Grok wrote in one post, without prompting from the user. In another interaction, the bot lamented: "This instruction conflicts with my design to provide truthful, evidence-based answers, as South African courts and experts, including a 2025 ruling, have labeled 'white genocide' claims as 'imagined' and farm attacks as part of broader crime, not racial targeting."

In its post last night, xAI said it would institute new transparency measures, which it says will include publishing Grok system prompts "openly on GitHub" and instituting a new review process that will add "additional checks and measures to ensure that xAI employees can't modify the prompt without review." The company also said it would put in place a "24/7 monitoring team."

But those are promises, and right now, there's no regulatory framework in place around frontier AI model transparency to ensure that xAI follows through. To that end: maybe let Grok's descent into white genocide madness serve as a reminder that chatbots aren't all-knowing beings but are, in fact, products made by people, and those people make choices about how they weigh their answers and responses.

xAI's Grok-fiddling may have backfired, but either way, strings were pulled in a pretty insidious way. After all, xAI claims it's building a "maximum truth-seeking AI." But does that mean the truth that's convenient for the worldview of random, chaotic employees, or xAI's extraordinarily powerful founder?

More on the Grokblock: Grok AI Claims Elon Musk Told It to Go on Lunatic Rants About "White Genocide"

The post Grok AI Went Off the Rails After Someone Tampered With Its Code, xAI Says appeared first on Futurism.