Warning on AI-Generated Explicit Images Raises Concerns for Female Students

By Kevin Lee

Raffaele Ciriello, a senior lecturer in information systems at the University of Sydney, has issued a stark warning about the rise of AI-generated explicit images targeting women and young female students. His remarks come as police investigate allegations that sexualised images of female students from a western Sydney high school were shared online. Ciriello said the case is not an outlier, describing it as part of a particularly alarming trend unfolding across Australia.

The use of artificial intelligence to create such images, Ciriello argued, points to a broader problem in society. The victims of this malicious use of AI technology are overwhelmingly women; research indicates that at least 95 to 99 percent of targets belong to this demographic. He called the incident "another shocking iteration" of the epidemic of gendered violence, underscoring the continued need for attention, awareness, and action.

The Scale of the Problem

Ciriello noted that there are "hundreds if not thousands of documented cases in Australia alone and many more abroad," underlining just how widespread the problem is. In the current case, a teenage male student at the Sydney secondary school reportedly referred multiple explicit images he had received to his school, and more than a dozen parents have since lodged complaints at Eastwood Police Station.

“The research is actually pretty clear that at least 95 to 99 percent of targets are women,” Ciriello stated. The figure underscores how readily AI technology can be weaponized against vulnerable individuals, particularly young women.

Legal Ramifications and Government Response

In response to these developments, Federal Attorney-General Michelle Rowland has taken a firm stance, declaring that anyone who generates deepfake pornography of minors will face harsh penalties, including up to 15 years in prison. “The Albanese government is committed to protecting children from image-based abuse,” Rowland affirmed, emphasizing the government’s dedication to safeguarding young people from such violations.

Ciriello suggested that existing laws from the early 20th century prohibiting impersonation might apply to these emerging technologies. The tools facilitating these abuses, he pointed out, are increasingly permeating society and business, and new legal frameworks are needed to address the distinctive issues that AI raises.

Understanding the Motivations

Ciriello also detailed the motivations behind this ‘bad actor’ content. He found that most perpetrators are opportunistic, acting in the heat of the moment, driven by sexual gratification or other, darker objectives. “They often do that out of something silly, they’re not really thinking or they’re just seeking sexual gratification but it can be more sinister than that … it’s a continuation of gendered violence,” he explained.
