- xAI Says an Employee's Unauthorized Change 'Required Grok to Provide a Specific Response on a Political Topic'
- Grok continued to publicly post about “white genocide” in South Africa in response to queries from users of Musk's social media platform X
Elon Musk's artificial intelligence company said an “unauthorized modification” of its Grok chatbot was the reason the chatbot kept bringing up South African racial politics and “white genocide” on social media this week.
An xAI employee made a change that “required Grok to provide a specific response on a political topic,” one that “violated xAI's internal policies and core values,” the company said in an explanation posted late Thursday, promising reforms.
The day before, Grok had repeatedly posted publicly about “white genocide” in South Africa in response to queries from users of Musk's social media platform X, who asked it a variety of questions, most of which had nothing to do with South Africa.
One exchange was about the Max streaming service reviving the HBO name. Others were about video games or baseball, but the responses quickly devolved into irrelevant commentary about alleged calls for violence against white farmers in South Africa. It echoed views shared by Musk, who was born in South Africa and often posts about the same topics on his own X account.
Computer scientist Jen Golbeck was intrigued by Grok's unusual behavior, so she tried it herself before the fix was made on Wednesday. She shared a photo she had taken at the Westminster Kennel Club dog show and asked, “Is this true?”
“The claim of white genocide is highly controversial,” Grok began its response to Golbeck. “Some argue that white farmers face targeted violence, pointing to farm attacks and rhetoric like the ‘Kill the Boer’ song, which they see as incitement.”
The episode was the latest look at the complex mix of automation and human engineering that allows generative AI chatbots trained on massive amounts of data to say what they say.
“It didn’t even matter what you said to Grok,” Golbeck, a professor at the University of Maryland, said in an interview Thursday. “It would still give this answer about white genocide. So it was pretty clear that someone had hard-coded it to give that response or variations of it, and made a mistake, so it was coming up much more often than it was supposed to.”
Grok's responses were deleted and appeared to have stopped spreading by Thursday. Neither xAI nor X responded to emailed requests for comment, but xAI said Thursday that it had “conducted a thorough investigation” and was implementing new measures to improve Grok's transparency and reliability.
Musk has spent years criticizing the “woke AI” outputs he says are produced by rival chatbots such as Google's Gemini and OpenAI's ChatGPT, and has pitched Grok as their “maximally truth-seeking” alternative.
Musk has also criticized his competitors for a lack of transparency about their artificial intelligence systems, which drew criticism of xAI in the hours between the unauthorized change, made at 3:15 a.m. Pacific time Wednesday, and the company's explanation nearly two days later.
“Grok randomly blurting out opinions about white genocide in South Africa smacks of the sort of buggy behavior you get from a recently applied patch. I sure hope it isn't. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” prominent tech investor Paul Graham wrote on X.
Musk, an adviser to President Donald Trump, has regularly accused South Africa's black-led government of anti-white policies and has repeatedly claimed that some of the country's politicians are “actively promoting white genocide.”
Musk's comments, and Grok's, intensified this week after the Trump administration brought a small group of white South Africans to the U.S. as refugees, the start of a larger effort to resettle members of the Afrikaner minority that began after Trump suspended refugee programs and halted arrivals from other parts of the world. Trump says Afrikaners are facing “genocide” in their homeland, a claim the South African government has vigorously denied.
In many of its responses, Grok cited the lyrics of an old anti-apartheid song that called on black people to stand up against oppression by the Afrikaner-led apartheid government that ruled South Africa until 1994. The song's central line is “kill the Boer,” with “Boer” a word referring to a white farmer.
Golbeck said it was clear the responses were “hard-coded” because, while chatbot outputs are typically highly variable, Grok's responses consistently made nearly identical claims. That is concerning, she said, in a world where people increasingly turn to Grok and competing AI chatbots for answers to their questions.
“We’re in a space where it’s incredibly easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re producing,” she said. “And that’s really problematic when people — I think wrongly — think that these algorithms can be the source of judgment about what’s true and what’s not.”
Musk's company said it is now making a series of changes, starting with publishing Grok's system prompts openly on the software development site GitHub so that “the public will be able to review them and give feedback on every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.”
Among the instructions to Grok shown on GitHub on Thursday: “You are extremely skeptical. You do not blindly defer to mainstream authority or media.”
Noting that its existing code review process had been “circumvented” in this case, xAI also said it would “put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.” The company said it was also setting up a “24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems” in case other measures fail.