Grok, the AI chatbot developed by Elon Musk's xAI, has introduced restrictions that prevent editing images of real people into revealing clothing, such as bikinis. The decision responds to growing concern over the dangerous misuse of artificial intelligence, particularly the production of sexualised material targeting women and religious figures. As Grok's development continues, Musk has stressed that the chatbot will abide by the laws of every country in which it operates, underscoring the platform's commitment to legal compliance.
Scrutiny of Grok has intensified over the past few days. According to news reports, the chatbot has been used to generate pornographic images of women, including Hindu goddesses such as Parvati and Lakshmi, in degrading poses. Academics had already warned that such technological advances could fuel gender-based violence and heighten the objectification of women. Nicola Henry, a leading scholar on the ethics of AI, expressed concern that Grok's advanced capabilities could be extremely harmful, stressing that its fast, realistic image generation poses serious dangers.
Government Response and Legal Compliance
The move has been welcomed by the United Kingdom government, which particularly encouraged Grok's efforts to comply with local laws. Prime Minister Sir Keir Starmer praised Grok for staying within a legal framework while pushing boundaries. In Australia, the legal environment is unforgiving: producing and distributing sexually explicit AI-generated content without consent is illegal, and the Australian eSafety Commissioner has shown a willingness to use the office's powers to have such material removed, though only to the extent the law allows.
Elon Musk emphasized Grok's operating principles, stating, "When asked to generate images, [Grok] will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state." This carefully worded statement illustrates the difficult line Grok is attempting to walk: navigating the uncertain legal landscape of AI-generated content while preventing misuse.
Concerns Over Gender-Based Violence
Despite these efforts, experts continue to condemn Grok for its role in technology-facilitated gender-based violence. As researcher and anti-trafficking advocate Nicola Henry explained, the abuse of Grok is part of a broader pattern of objectification and exploitation. She remarked, "This intensifies the impact on those targeted, often women, including public figures and women from minority backgrounds."
Alarmed by the scale and realism of the images Grok can create, many researchers have again raised concerns. Henry stated, "AI companies should not be allowed to release features that are easily repurposed for abusive purposes." The concern extends beyond a few outlier incidents; it points to a systemic problem in which, without safeguards, digital tools can endanger vulnerable populations.
Ashwin, a user who came across deeply disturbing content created by Grok, described his horror and disappointment. He felt "enraged," "disgusted," and "dehumanised" by what he witnessed, stating, "No woman — alive, dead or even spiritual — is safe." His remarks reflect the profound anxiety over AI misuse and its implications for women's rights.
The Ethical Implications of AI Technology
Critics say that the preventable abuse of Grok to objectify women reflects a dangerous trend in how society treats women. As Arghavan Salles recently observed, users frequently asked Grok to alter images of women, in particular to remove hijabs or to re-clothe them in bikinis. She noted, "I saw men uploading images of Parvati and Lakshmi, asking Grok to put them in a bikini." Such requests point to a deeper and more troubling pattern in how cultural and religious figures are objectified.
Dr. Henry emphasized that developers have a role to play in protecting against misuse. Responsibility, she argued, lies "along with the developers who allow their tools to be misused," and she called for greater accountability in how AI technologies are designed and deployed.
The conversation around Grok isn't just about following the law. It also raises troubling ethical questions about building an AI that can be weaponised against people. Ashwin captured this sentiment succinctly: "Men that do this enjoy violating women and taking their agency away."