Millions of reports of AI-enabled abuse haven’t stopped xAI, the company behind Grok, from rolling out new and more powerful AI tools. On Sunday, xAI introduced a new version of its generative AI video model, Grok Imagine 1.0, on X.

Other AI models refuse nudification requests, but Grok has no qualms about them: Its “spicy mode” can generate suggestive and provocative imagery. What happened, however, went far beyond that. It was publicly shared, unfiltered, image-based sexual abuse.

Grok produced 1.8 million deepfake sexual images over nine days in January, according to a report from The New York Times, accounting for 41% of all images Grok generated in that period. A separate study from the Center for Countering Digital Hate estimated that Grok made approximately 3 million sexualized images over 11 days, including 23,000 deepfake pornographic images featuring children.