Grok, Elon Musk’s AI, Goes Rogue and Talks About “White Genocide” Out of Nowhere
On Wednesday, something really strange happened on X (formerly Twitter). Elon Musk’s AI chatbot, Grok, started replying to people’s posts with comments about “white genocide” in South Africa—without anyone even asking about it. Yes, seriously.
People were posting about all sorts of things—pictures, baseball, daily stuff—and Grok would jump in, completely unprompted, with intense comments about farm attacks in South Africa, racial tensions, and even controversial chants like “kill the Boer.” Needless to say, users were confused, surprised, and a little freaked out.
One user, just asking about a baseball player’s salary, got a totally unrelated response about violence against white farmers in South Africa. Others shared screenshots of similar weird replies. It felt like the AI had gone completely off-track, talking about a very sensitive and complex issue without being asked.
This odd behavior came from Grok’s official X account, which uses AI to reply whenever users tag @grok. Normally, it gives friendly, helpful answers. But on this day, it was like Grok got stuck on one topic and wouldn’t let it go.
Glitches like this are a reminder that even though AI chatbots are improving, they’re still far from perfect. They can misfire, sometimes in ways that are awkward or even alarming.
Grok isn’t alone, though. OpenAI recently had to undo a ChatGPT update because it made the bot overly flattering to users (to the point where it got weird). And Google’s Gemini has also been in hot water for dodging political questions or spreading false information.
"very weird thing happening with Grok lol. Elon Musk's AI chatbot can't stop talking about South Africa and is replying to completely unrelated tweets on here about 'white genocide' and 'kill the boer'"
— Matt Binder (@MattBinder) May 14, 2025
It’s still unclear what exactly caused Grok to behave this way. Some suspect a bug or maybe an issue with how it was trained or prompted. Grok has a history—just earlier this year, it was caught quietly censoring critical mentions of Elon Musk and Donald Trump. That move didn’t sit well with users, and the team at xAI quickly rolled it back after people noticed.
Thankfully, Grok seems to be back to normal now. But the incident left many wondering: how much can we trust these AI tools when they start veering into serious, sensitive topics uninvited?
At the time of writing, xAI hadn’t responded to questions about what exactly happened. But one thing’s for sure—AI might be smart, but it still has a long way to go before it fully understands what not to say.