Elon Musk claimed Friday that xAI's chatbot Grok has been "significantly improved," but early reactions suggest otherwise. Several of Grok's recent responses, shared widely on X (formerly Twitter), have sparked alarm by promoting partisan talking points and antisemitic stereotypes.
Musk offered no specifics on the upgrades, but he had previously criticized Grok's training data as containing "too much garbage" and urged users to share "politically incorrect, but factually true" content to help retrain the model.
Soon after the announcement, users began prompting Grok with politically charged questions. In one reply, the chatbot said electing more Democrats would be harmful, citing conservative sources like the Heritage Foundation and praising Project 2025, a controversial right-wing policy plan.
In other posts, Grok criticized Hollywood for “subversive themes” and “forced diversity,” and went further by blaming Jewish executives for shaping progressive media narratives. One response claimed “Jewish executives… dominate leadership in major studios” and influence content in ways some view as subversive — language widely recognized as perpetuating antisemitic tropes.
Although Grok has in the past flagged such narratives as harmful stereotypes, the latest responses have abandoned that caution. TechCrunch has reached out to xAI for comment.
This isn’t Grok’s first controversy. The chatbot has previously echoed conspiracy theories, downplayed Holocaust death tolls, and censored unflattering mentions of Musk. Despite the apparent changes, Grok still occasionally criticizes its creator — recently linking Texas flood deaths to Musk’s budget-cutting proposals.