Australian Member of Parliament Kate Chaney is preparing to act on the harms artificial intelligence (AI) poses to children. In the coming weeks, she plans to introduce a bill that would criminalize the use of technology specifically engineered to produce child abuse material. The move responds to growing alarm about AI-generated content, which, as former police detective inspector Jon Rouse has underlined, can cause harm to real victims. The bill would close significant gaps in existing law so that children are protected from exploitation in the first place.
Chaney intends to bring the bill to parliament in the very near future. It would create a new offence of using a carriage service to download, access, supply, or otherwise facilitate technologies that generate child abuse material, with a maximum penalty of 15 years in prison. Chaney, the independent member for Curtin, said Australia urgently needed to plug holes in the country's laws that leave children at risk.
Growing Concerns Over AI Technologies
Jon Rouse, the former detective inspector, took part in a recent roundtable discussion on the issue and offered a chilling assessment of AI-produced abuse material. He pointed out that such content typically relies on photos or videos of real victims, compounding the trauma those people have already suffered.
“While existing Australian legislation provides for the prosecution of child sexual abuse material production, it does not yet address the use of AI in generating such material.” – Professor Rouse
The growing accessibility of these tools, which are becoming remarkably easy to use, makes the danger more acute. Several such AI tools have already become viral sensations, attracting millions of visits. As the tools become more readily available, so does the opportunity for misuse.
Chaney warned that offenders could use these tools to produce abuse material on demand, including images of specific children. She stressed the need for immediate legislative action to combat this alarming trend:
“These tools enable the on-demand, unlimited creation of this type of material, which means perpetrators can train AI tools with images of a particular child, delete the offending material so they can’t be detected, and then still be able to generate material with word prompts.” – Kate Chaney
Legislative Action and Government Response
A parliamentary inquiry initiated last year by former industry minister Ed Husic revealed shortcomings in the laws meant to protect children from the dangers of AI. Chaney said closing these gaps should be an urgent priority for the new federal government this parliamentary term, and committed to ensuring the legislation passes swiftly:
“This is going to have to be an urgent focus for this government, regulating the AI space.” – Kate Chaney
Attorney-General Michelle Rowland acknowledged the importance of closing these legal loopholes, saying the protection of those least able to protect themselves must remain a priority for any government.
“Keeping young people safe from emerging harms is above politics, and the government will carefully consider any proposal that aims to strengthen our responses to child sexual exploitation and abuse.” – Michelle Rowland
In a recent meeting with Rowland’s office, Chaney pressed the case for acknowledging the gaps in current legislation. The consensus among roundtable participants was clear: there is no public benefit in delaying regulation of child abuse generators while a comprehensive approach to AI governance is developed.
The Path Forward
The proposed legislation is one part of a broader government effort to address the public safety risks posed by increasingly advanced, fast-moving AI technologies. The federal government is expected to release an overarching strategy for regulating AI soon, a comprehensive plan covering multiple issues with child safety among its central concerns.
Chaney acknowledged that regulating AI technology is complicated, but argued that short-term steps can be taken now to keep existing laws robust against new dangers:
“[It] clearly needs to be done urgently and I can’t see why we need to wait to respond to this really significant and quite alarming issue.” – Kate Chaney
Expanding on that point, she said the broader work on AI governance should not hold up targeted fixes:
“I recognize the challenges of regulating AI — the technology is changing so fast it’s hard to even come up with a workable definition of AI — but while we are working on that holistic approach, there are gaps in our existing legislation we can plug to address the highest-risk-use cases like this, so we can continue to build trust in AI.” – Kate Chaney