The Australian government has also recently released a scoping report on an age-assurance trial, a new initiative that would further limit young people's access to social media and other adult-oriented websites. In a webinar series on the project, eSafety Commissioner Julie Inman Grant highlighted new research showing that YouTube has become the dominant platform for children, raising alarm over the dangers they may face there.
The report arrives alongside eSafety's latest research, which found that the majority of young Australians use YouTube every day. Alarmingly, children as young as ten are reportedly spending up to five hours a day chatting with sexualized AI companion chatbots. In response, Ms. Inman Grant is urging lawmakers to add a clause to the proposed ban that would cover platforms like YouTube, which present significant threats to young users.
Growing Concerns Over Online Safety
The preliminary report contains striking statistics on young people's engagement with the online world. eSafety's research shows that 70 percent of children have encountered harmful content online, and YouTube is by far the platform where young Australians are most often exposed to such harm.
“When we asked where they were experiencing harm and the kinds of harms they were experiencing, the most prevalent place where young Australians experienced harm was on YouTube — almost 37 percent,” – Ms. Inman Grant
This harmful content spans a wide range, from incel podcasts to videos glorifying suicide, as well as online challenges that incite dangerous or extreme physical behavior. Schools have also raised alarms about cases in which minors were coached by AI companion chatbots to perform dangerous and violent sexual acts on themselves or others.
“In February, eSafety put out its first Online Safety Advisory because we were so concerned with how rapidly children as young as 10 were being captivated by AI companions — in some instances, spending up to five hours per day conversing with sexualized chatbots,” – Ms. Inman Grant
These findings underscore the urgency of regulatory action to protect young users from the dangers present on platforms like YouTube.
Recommendations for Policy Changes
Ms. Inman Grant is pushing for YouTube's inclusion in the proposed social media ban, appealing directly to federal Communications Minister Anika Wells. Her letter formally recommends that the platform be subject to the same scrutiny as the other social media sites targeted by the ban.
She also calls attention to the need for transparency around any exemptions granted to social media sites, arguing that the rules in their current form are overly prescriptive and could undermine efforts to create a safer online space for children.
“Absent such a rule, eSafety would likely exercise discretion not to enforce compliance with the obligation for lower-risk services that are appropriate for young children in the absence of identified harm,” – Ms. Inman Grant
The recommendation is intended to ensure that beneficial, age-appropriate services for children are not swept up in the ban; for example, services with no identified risk would not be subject to a blanket prohibition.
The Role of AI Companions in Online Risks
The development of AI companions such as Replika has further complicated online safety conversations. As Ms. Inman Grant notes, AI technologies hold both promise and peril: they can do a great deal of good, but they pose serious danger when misused or when harmful content is amplified unintentionally.
“Just as AI has brought us much promise, it has also created much peril. And these harms aren’t just hypothetical — they are taking hold right now,” – Ms. Inman Grant
Reports from schools and educators about AI's alarming effects on children's behaviour have made the issue a priority for eSafety. The agency wants to focus on how these technologies can inadvertently lead children down dangerous pathways of interaction and content.
“Schools reported to us these children had been directed by their AI companions to engage in explicit and harmful sexual acts,” – Ms. Inman Grant
eSafety continues to push for more robust regulations designed to protect young users while preserving their ability to engage safely with technology.