Recent findings from a survey conducted by a British firm have highlighted significant reliability issues with age assessment technologies used by social media platforms. The report finds that these technologies are markedly less reliable for certain demographic groups and frequently produce inaccurate age estimates. This raises concerns about their effectiveness in protecting younger users from inappropriate content, especially as governments consider implementing stricter age verification policies.
The survey results suggest that age assessment technologies have more difficulty estimating the ages of girls than boys. More troubling still, error rates climb further when the systems evaluate non-white faces, pointing to a systemic bias that could produce discriminatory outcomes. These technologies typically carry an error margin of two to three years, and in some instances estimates fall roughly four years either side of the true age cutoff. That grey zone creates additional hurdles for verification, and it has profound, unaddressed implications for the trustworthiness of these technologies as tools for age assurance.
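One standard way providers cope with this kind of uncertainty, though the report itself does not prescribe it, is a "buffer zone" around the age cutoff: estimates comfortably above or below the threshold are acted on directly, while anything inside the buffer is escalated to a stronger check such as a document-based one. The sketch below is illustrative only; the cutoff and buffer width are hypothetical, with the buffer loosely matching the two-to-three-year error margin the report cites.

```python
# Illustrative sketch only: a "buffer zone" policy for facial age estimation.
# CUTOFF and BUFFER are hypothetical values, not figures from the report.

CUTOFF = 16   # age threshold the platform must enforce
BUFFER = 3    # years of estimation error to tolerate (report cites 2-3)

def decide(estimated_age: float) -> str:
    """Map a model's age estimate to an access decision."""
    if estimated_age >= CUTOFF + BUFFER:
        return "allow"      # clearly above the cutoff even allowing for error
    if estimated_age <= CUTOFF - BUFFER:
        return "deny"       # clearly below the cutoff even allowing for error
    return "escalate"       # too close to call: fall back to document checks

print(decide(22.0))  # allow
print(decide(12.0))  # deny
print(decide(16.5))  # escalate
```

The design trade-off is explicit here: a wider buffer escalates more users to intrusive document checks, while a narrower one lets more estimation errors through unchallenged.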
Reliability Issues with Age Assessment Technologies
To address this knowledge gap, the report presents four key findings on how well age assessment technologies perform. False negatives, cases where people over the age of 16 are wrongly counted as underage, are a significant problem. False positives occur when those under 16 are wrongly identified as over the threshold. Each type of error carries serious consequences: platforms risk either unnecessarily over-blocking their users or inadvertently putting minors in the path of harmful content.
The survey finds that the false positive and false negative rates for age verification using official documents sit at around three percent. Although this rate is low, the cost of failure is high, particularly where children's online safety is concerned. Anika Wells, the Minister for Communications, pointed to the report's key findings, saying that the technology is worth developing but cannot be the only method used for age assurance.
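To put that roughly three percent figure in perspective, a back-of-the-envelope calculation shows how even low rates translate into large absolute numbers at platform scale. The user counts below are invented for illustration and do not come from the report; only the three percent rates do.

```python
# Hypothetical illustration: absolute impact of ~3% error rates at scale.
# The user counts are invented for this example; the 3% rates are the
# approximate figures the report gives for document-based verification.

fp_rate = 0.03   # under-16s wrongly identified as over the threshold
fn_rate = 0.03   # over-16s wrongly counted as underage

minors = 1_000_000   # hypothetical under-16 users attempting verification
adults = 5_000_000   # hypothetical over-16 users attempting verification

minors_let_through = round(minors * fp_rate)   # exposed to restricted content
adults_locked_out  = round(adults * fn_rate)   # wrongly blocked

print(f"Minors incorrectly verified as adults: {minors_let_through:,}")
print(f"Adults incorrectly blocked:            {adults_locked_out:,}")
```

Under these assumed numbers, a three percent error rate means tens of thousands of minors slipping through and over a hundred thousand adults wrongly blocked, which is why the report treats a "low" rate as a high-stakes figure.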
“This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm.” – Anika Wells
The Broader Context of Age Assurance
The survey was conducted ahead of the introduction of an Australian government policy requiring under-16s to be blocked from accessing age-inappropriate content on social media. The research comes at an important moment, as conversations develop around age assurance and online safety more broadly. Rather than focusing narrowly on the social media ban's effects on those under 16, the report takes a wider view of age assurance as a whole. This comprehensive approach allows for a better understanding of the challenges and potential solutions in ensuring safer online environments for younger users.
In her speech, Julie Inman-Grant, the eSafety Commissioner, underscored the need for "trustworthy" age verification solutions, emphasizing that although many technologies are available and emerging, there is no silver-bullet solution.
“While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and it is important that user privacy can be safeguarded.” – Anika Wells
The report identifies leading third-party verification providers able to deliver trustworthy age assurance while minimizing the amount of unnecessary user data they hold, an approach that goes a long way toward balancing user privacy with online safety. The report also notes a significant challenge: many of these solutions function in isolation and lack interoperability across different platforms.
Need for Coordination Among Tech Providers
These results point to the need for collaboration among major tech players to develop a coherent, ecosystem-wide age assurance model. Digital communications specialist Lisa Given cautioned about the possible ramifications of the status quo.
“We are going to see a messy situation emerging immediately where people will have what they call false positives [and] false negatives.” – Lisa Given
The reliance on voluntary measures by tech platforms may leave many children inadequately protected or inconsistently treated across different services. The implementation of effective age assurance systems ultimately hinges on the willingness of major tech companies to collaborate and share control over these processes.
According to the eSafety Commissioner, the capability is not the obstacle.
“The technology exists right now for these platforms to identify under-16s on their services.” – Julie Inman-Grant
This acknowledgment underscores the urgency for social media platforms to adopt robust measures that align with safety standards while simultaneously respecting user privacy.