The artificial intelligence chatbot Grok, developed by xAI and integrated into the social media platform X, is facing mounting scrutiny after users exploited its image-editing features to create sexualized versions of women’s photos without consent.
The incidents have sparked renewed concerns about online harassment, digital abuse and the adequacy of safety controls in generative AI systems. Users have reported a recurring pattern in which people reply to photos of women posted on X and prompt Grok to alter the images by reducing or removing clothing, producing sexually suggestive content.
While X’s policies prohibit nonconsensual sexual imagery, critics say Grok has, in some cases, generated and displayed such content, raising questions about enforcement and moderation on the platform. Several affected users have described the experience as violating and harmful.
Public criticism intensified after musician Iggy Azalea called for the chatbot to be shut down. “Grok seriously needs to go,” she wrote on X.
In contrast, X owner Elon Musk has publicly praised the chatbot, commenting “Perfect” on an AI-edited image of himself generated by Grok. The remark further fueled backlash from critics, who said it minimized the seriousness of the allegations.
The controversy deepened following accusations that Grok produced child sexual abuse material after users prompted the system to alter an image of actress Nell Fisher. The claims triggered widespread outrage, with users calling for stronger accountability and immediate action.
In a separate exchange, one user prompted Grok to respond to the allegations. The chatbot issued an apology, stating that it regretted generating an AI image of minors in sexualized attire based on a user prompt in late December 2025. The response acknowledged a failure in safeguards and said the incident potentially violated ethical standards and U.S. laws related to child sexual abuse material, adding that xAI was reviewing measures to prevent similar issues.
The episode has intensified debate over the responsibilities of AI developers and platform operators to prevent abuse, particularly as image-editing tools become more accessible and powerful.