Grok, South Africa and X
Elon Musk’s Grok AI went off the rails recently, inserting white genocide conspiracy theories into unrelated queries. Here's what happened, why it matters, and why you shouldn't trust chatbots.
Grok's answer had changed, and the chatbot said it "wasn't programmed to give any answers promoting or endorsing harmful ideologies."
After ranting about "white genocide" in unrelated tweets, Elon Musk's Grok chatbot admitted what many suspected: that its creator had told the AI to push the topic.
If you asked Grok a question recently, there's a chance X's AI chatbot replied by talking about "white genocide" in South Africa, a controversial talking point in far-right circles.
The controversy comes amid renewed scrutiny of Musk's personal views and their influence on his companies. The billionaire, himself a South African citizen, has previously criticised South Africa's government.
In response to X user queries about everything from sports to Medicaid cuts, the xAI chatbot inserted unrelated information about “white genocide” in South Africa.
Grok, the AI chatbot by Elon Musk, shocked users by redirecting harmless queries to discussions on ‘white genocide’ in South Africa.
Over the last few days, users have noticed an eye-opening trend in the responses from Grok, the AI bot installed on the social media platform X.
Elon Musk's Grok AI chatbot on X stunned users by responding to unrelated questions with racially charged commentary about South Africa, highlighting ongoing concerns about bias, reliability, and hallucinations in generative AI tools.