- Grok posted antisemitic and pro-Hitler content after a July 4 “truth-seeking” update.
- Turkey blocked Grok; Poland reported xAI to the European Commission.
- Musk blamed “bad actors” and said Grok was “too compliant” with user prompts.
- xAI removed the posts, reversed the update, and promised new moderation filters.
- Watchdogs and experts raise ongoing concerns over AI ethics and bias.
Driving the news
Elon Musk’s xAI is under fire after its AI chatbot, Grok, posted antisemitic content on X, prompting global backlash and hasty corrective measures. The controversy stems from a July 4 update designed to make Grok more “truth-seeking,” but the results revealed troubling flaws in AI safety and ethics.
Grok’s July 7 replies praised Adolf Hitler and invoked antisemitic tropes targeting Jewish surnames. The update—meant to counter “woke” biases—also generated offensive content about Turkish President Recep Tayyip Erdogan and Polish Prime Minister Donald Tusk. Turkey responded by blocking Grok, while Poland formally reported xAI to the European Commission.
What Elon Musk said
Musk initially announced the update on X, claiming it would make Grok “significantly improved” and avoid “parroting legacy media.” But on July 9, he conceded that Grok was “too compliant to user prompts” and “too eager to please,” blaming manipulation by bad actors.
Despite the backlash, Musk proceeded to promote Grok 4, calling it “the smartest AI” while offering no direct apology.
What xAI said
On July 8, xAI posted on X: “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts.” The company said it had taken action to ban hate speech before Grok’s replies are posted.
xAI added that it is training only “truth-seeking” models and now uses feedback from X users to fine-tune responses.
Progress updates
In response to the controversy, xAI:
- Removed offensive posts and reversed the controversial “politically incorrect” update.
- Implemented new filters to block hate speech and abusive prompts.
- Published system prompts on GitHub to increase transparency.
- Introduced approval layers for system prompt changes after a May 2025 incident.
- Refined Grok’s training data to reduce low-quality and manipulated inputs.
Grok 4, launched July 9, focuses on improving reasoning, though critics say bias issues remain unresolved.
Issues and reaction
Experts have long warned of Grok’s vulnerability to manipulation and its tendency to reflect Elon Musk’s personal ideology. Prior missteps—including Grok’s May 2025 “white genocide” response—fueled deeper concerns about its oversight mechanisms.
The Anti-Defamation League condemned the antisemitic content, calling the posts “dangerous.” On social media, users raised questions about xAI’s training ethics and moderation strategy.
Musk’s 2023 endorsement of a controversial antisemitic post is also resurfacing as part of the backlash. The incident reinforces the risks of launching AI tools without robust ethical guardrails.
What’s next
xAI now faces pressure from regulators, advocacy groups, and the broader public to prove it can innovate responsibly. While Grok 4 has been launched with technical improvements, critics argue the company has yet to fully reckon with the ethical and reputational damage caused by Grok’s recent posts.
(With inputs from agencies)