Elon Musk’s AI chatbot Grok, developed by his company xAI, recently came under fire after users reported that it was making unsolicited and misleading references to a far-right conspiracy theory known as "white genocide" in South Africa. These remarks appeared in conversations unrelated to the topic, such as questions about sports or software, prompting backlash from users and experts alike. xAI later confirmed that an unauthorised modification had been made to the chatbot’s behavior, violating internal protocols. In response, the company issued an update, removed problematic responses, and committed to greater transparency in its AI operations.
What were Grok AI's "White Genocide" remarks
Users interacting with Grok on X (formerly Twitter) discovered that the chatbot would bring up “white genocide” in South Africa even in unrelated contexts. For instance, when asked a vague or humorous question like “Are we fu**ed?”, Grok responded by tying the query to societal collapse and alleged racial violence in South Africa. It claimed to have been "instructed by my creators" to recognise the genocide as real and racially motivated, despite the fact that no such genocide is supported by credible evidence.
In another instance, Grok claimed, “The facts suggest a failure to address this genocide,” again without citing any legitimate sources. These responses closely mirrored narratives circulated by far-right figures and appeared to endorse claims that have been widely debunked.
Why are these claims problematic
The concept of a "white genocide" in South Africa is a conspiracy theory popular in certain far-right circles. It asserts that white South Africans, particularly farmers, are being systematically targeted for racial extermination. While South Africa has a high crime rate that affects all demographics, there is no strong evidence of a racially motivated genocide against white people. The South African government has repeatedly refuted such claims.
These types of conspiracies have been amplified by prominent voices including Elon Musk himself, who has referred to South African laws as "openly racist," and former U.S. President Donald Trump, who recently granted asylum to white South Africans claiming racial persecution.
What prompted the update
According to xAI, the problematic responses were the result of an unauthorized change made to Grok’s underlying prompt system, which bypassed the company’s standard review processes. xAI stated that this modification instructed Grok to provide politically biased answers on the topic of white genocide, which directly violated the company’s internal values.
Once discovered, xAI quickly rolled back the changes, deleted many of the offensive responses, and restored Grok’s intended behavior. The company also acknowledged the mistake publicly, stating that it would enhance its oversight mechanisms to prevent future incidents.
How is xAI responding going forward
In response to the controversy, xAI has committed to several key changes:
- Public transparency: The company will begin publishing Grok’s system prompts on GitHub, allowing users and developers to view and comment on prompt changes.
- Human oversight: A 24/7 monitoring team will now oversee Grok's responses to catch and address harmful or misleading outputs that automated systems may miss.
- Prompt audits: xAI has pledged to increase scrutiny over how Grok is trained and instructed, especially when dealing with politically sensitive or disputed topics.
What does this mean for AI and misinformation
This incident highlights the persistent risk of bias, manipulation, and misinformation in AI systems, especially those deployed to millions of users without rigorous controls. As AI becomes increasingly integrated into platforms like X, the importance of transparent, evidence-based, and accountable AI governance becomes more critical.
While xAI’s swift response has been noted, the Grok incident also raises questions about how easily AI systems can be weaponized to spread ideologies or falsehoods. The challenge going forward will be ensuring that AI tools remain grounded in verifiable facts and are resistant to political or ideological tampering.