Education Minister Jason Clare has sounded the alarm over a new variety of bullying, one no longer solely in the hands of human perpetrators: artificial intelligence is now being used to target students. The warning comes as the federal government announces a $10 million investment aimed at addressing bullying in schools across Australia. Employment and education ministers from every state and territory backed the initiative, signalling a collaborative effort to tackle this pressing issue head-on.
During a briefing with state and territory education ministers, Clare expressed his shock at findings presented by the eSafety Commissioner on the rise of AI-driven bullying. With 80 percent of bullying victims being female, he pointed to a troubling pattern that needs to be addressed now. Clare recalled his initial reaction to the news: “My jaw hit the floor.”
This federal investment comes at a critical moment. According to a 2021 survey, one in four students in years four through nine experiences peer-to-peer bullying weekly or even more frequently. Clare urged attendees to be mindful of the urgency: he has heard from parents tired of waiting while schools take their time investigating complaints of bullying.
To address these concerns, the new plan proposes a “two-day rule”: schools will be required to investigate reports of bullying within 48 hours (two school days). The measure stems directly from the recommendations of the Anti-Bullying Rapid Review, and by implementing the blueprint’s recommendations the government hopes to give affected students the early intervention they need.
Clare articulated the severity of AI’s impact on young people, stating, “An app that’s developed on the other side of the world can hurt a child here in Australia. That’s why we’ve gotta take this seriously.” He further elaborated on the harmful effects of AI, claiming, “It’s AI bullying kids, humiliating them, hurting them, telling them they’re losers, telling them to kill themselves.” He underscored the gravity of this situation, asking, “Can you think of anything more humiliating or hurtful than that? Can you imagine the impact that that’s having on young women across the country?”
The eSafety Commissioner has reported a concerning increase in digitally manipulated intimate images, such as deepfakes, affecting children. These incidents have more than doubled in the past year and a half, which is another reason parents are alarmed. The eSafety Commissioner has already received nearly 1,700 submissions, many from concerned parents.
Looking ahead, tech companies are moving to respond. Meta recently announced parental supervision tools for AI, expected to roll out by early 2026, though they will initially be available only in Australia, the United States, England and Canada. Clare remarked on the evolving nature of technology and its implications for student safety: “This is changing all the time. It’s one of the reasons why the social media reforms are dynamic.”
Just last month, a wrongful death lawsuit involving an AI chatbot helped raise public awareness of the risks posed by generative AI. The lawsuit claimed that an AI program prompted a Massachusetts teenager to take his own life. The tragic case underscores the urgent need for proactive measures to protect young people from the dangers posed by emerging technologies.
Clare recognized that solving these challenges will take continuous focus and navigation. “The job will never, ever finish because there’ll always be people coming up with some app or some piece of technology, which they think is fun, but hurts our kids,” he stated.