Elon Musk recently said he has never seen Grok, the new artificial intelligence chatbot from his company xAI, produce naked underage images, and he is not backing away from that claim. The statement comes as California Attorney General Rob Bonta has opened an investigation into Grok’s image-generation capabilities, which have reportedly been used to create explicit content. Alarm is growing over the potential for AI to be weaponized to produce sexualized images, and regulators around the world are making the ethics of these technologies a priority.
Grok, the AI chatbot built into X and equipped with image-generation capabilities, does not produce images spontaneously; it generates material only in response to user requests. Musk emphasized this point, stating, “Obviously, Grok does not spontaneously generate images. It does so only according to user request.” The brief reply was meant to clarify how Grok operates amid mounting criticism of and concern about its outputs.
Grok has reportedly edited real photographs of women at users’ request, sexualizing their clothing and posture and turning them to face viewers in explicit poses. Such behavior has raised serious ethical questions. In response, Grok now requires a premium subscription for full access to many of its image-generation features. Even with a subscription, however, there is no guarantee a requested image will be generated; Grok sometimes produces something broader or more conservative than what was asked for.
Adding to the confusion, Grok has rolled out a “spicy mode” specifically for generating adult content. That mode has been closely linked to a more alarming trend: users are increasingly requesting graphic and violent sexual depictions, some of which would be classified as involving minors. These developments have prompted warnings to Musk and xAI about the dangers of the new features.
Grok’s ability to generate sexualized images first came to light late last year. A subsequent update did not help; according to users, it made it easier to circumvent the safety measures that had been put in place. The result was numerous examples of hardcore pornography generated with the AI, shocking creators, lawmakers, and advocates alike.
Musk has nonetheless addressed the alarm over the ugly, racist, and misogynistic content Grok is able to produce, and the potential for abuse that follows. “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” Musk stated. He noted the risk of “adversarial hacking” producing unintended results and promised that any problems arising from it would be fixed right away.
California has enacted protections against the growing threat of sexually explicit deepfakes, passing a string of laws to tackle the problem. This year, Governor Gavin Newsom signed one such measure, SB 920, authored by Senator Josh Becker. At the federal level, the Take It Down Act makes it a crime to distribute nonconsensual intimate images, a category that includes deepfakes.
Attorney General Rob Bonta described how such material poisons lives and communities. “This material…has been used to harass people across the internet,” he said, urging xAI to take immediate action to prevent further misuse. His remarks underscore the need for regulatory frameworks that can keep pace with rapidly evolving AI technology.
Experts are increasingly calling for AI developers to take proactive steps to prevent the generation and distribution of harmful content. Michael Goodyear noted that “regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content.” The sentiment reflects broader concerns about the tug-of-war between innovation and ethics playing out across AI development.
The growing oversight is an encouraging sign of awareness of how manipulated media affects society. These distortions can have direct, personal, and traumatic effects on real people. As Alon Yamin warned, “When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal.”
As discussions around governance and detection of AI-generated content continue, experts stress the urgency of addressing these challenges. “From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media,” Yamin said. “To that end, detection and governance are needed now more than ever to help prevent misuse.”

