Australia will trial ‘age assurance’ tech to bar children from online porn. What is it and will it work?

Responding to a resurgence in gender-based violence and deaths in Australia, the National Cabinet has committed almost A$1 billion to a range of strategies.

Tackling “online harms” was among the new commitments, including a pilot program to explore age-checking technologies that could restrict children’s access to inappropriate material online.

Under-age exposure to adult content is considered a contributing factor in domestic violence, fuelling harmful attitudes towards relationships. Controlling access to adult material also ties into broader debates over children’s access to social media and other age-related restrictions.

While the details are yet to come, a roadmap for this was proposed more than a year ago by the eSafety Commissioner. Recent events have clearly spurred action, but there are questions over the effectiveness of tools to check the age of website visitors.

Implementing and enforcing these checks will be challenging, and there is the potential for people to bypass such “age assurance” controls. But while there’s no easy fix, there are some checks that could help.

What’s being proposed?

In March 2023, the eSafety Commissioner published a “Roadmap for age verification”, which outlined the risks of children accessing inappropriate content (primarily online pornography). This was a comprehensive report, identifying current approaches, views from various industry representatives, and highlighting existing legislative and regulatory frameworks.

Disappointingly, this was not the first such report. A 2020 parliamentary paper, “Protecting the age of innocence”, discussed similar issues and made similar recommendations:

It’s now not a matter of “if” a child will see pornography but “when”, and the when is getting younger and younger.

Some of the data in the 2023 report is quite shocking, including:

  • 75% of children aged 16–18 have seen online pornography
  • one third of those were exposed before the age of 13
  • half saw it between ages 13 and 15.

The report made extensive recommendations, including:

Trial a pilot before seeking to prescribe and mandate age assurance technology.

Assurance versus verification

While they may sound similar, there are distinct differences between age assurance and age verification.

Age assurance is most often seen in social media settings where an individual is asked for their date of birth. It’s effectively a self-declaration of age. This can also be found in certain applications (such as Facebook’s Messenger Kids) where a parent is nominated to confirm a child can have access to a service. It may also use biometrics to attempt to determine a person’s age, for example by using a webcam to visually classify a person’s age range based on appearance.

Age verification is a more rigorous approach, where some form of identity is provided and verified against a trusted source. A simple example can be seen in online systems where identity is verified using a driver’s licence or passport.
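To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The names, records and threshold are invented for illustration: age assurance simply trusts whatever date of birth the user declares, while age verification checks the claimed details against a trusted record before working out the age.

```python
from datetime import date

# Hypothetical illustration only: contrasts self-declared "age assurance"
# with "age verification" against a trusted record. All names and data are
# invented for this sketch; no real verification service is implied.

ADULT_AGE = 18

def age_from_dob(dob: date, today: date) -> int:
    """Calculate a person's age in whole years."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def age_assurance(self_declared_dob: date) -> bool:
    """Age assurance: trusts whatever date of birth the user types in."""
    return age_from_dob(self_declared_dob, date.today()) >= ADULT_AGE

# Stand-in for a trusted source (e.g. a licence or passport registry).
TRUSTED_RECORDS = {
    "LICENCE-1234": date(1990, 5, 17),
}

def age_verification(document_id: str, claimed_dob: date) -> bool:
    """Age verification: checks the claimed details against a trusted record."""
    recorded_dob = TRUSTED_RECORDS.get(document_id)
    if recorded_dob is None or recorded_dob != claimed_dob:
        return False  # unknown document, or details don't match the record
    return age_from_dob(recorded_dob, date.today()) >= ADULT_AGE

# A child can pass age assurance simply by typing an earlier birth year...
print(age_assurance(date(1990, 5, 17)))                      # True, even if untrue
# ...but verification fails unless the details match a genuine record.
print(age_verification("LICENCE-9999", date(1990, 5, 17)))   # False
```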

Will it work?

The concept of checking a person’s age seems like a simple and effective solution. The challenge lies in the reliability of the mechanisms available.

Asking a user to enter their date of birth is clearly open to misuse. Even seeking secondary approval (say, from a parent) would only work if there was a mechanism to confirm the relationship.

Similarly, a biometric approach depends on access to a webcam (or other sensor) and would itself raise concerns over privacy.

In verifying a person’s age, we are really talking about verifying identity – a topic steeped in controversy itself.

While ID verification is potentially a more reliable approach, it depends on trust, and on secure access to and storage of our identity records.

Given recent data breaches (including Outabox just this week, as well as Optus and Medibank incidents), any proposed system would have to rely on more than simply entering a passport number or other identifier. Perhaps it could use the myGovID service the government is currently expanding.

It needs coordinated effort

It is worth noting that any solution would likely see a verification request pushed from the content provider to an Australian-based (and likely government-managed) service.

This service would simply confirm to the content provider that the user has been verified as an “adult”. It is unlikely any proposed system would require identity data to be entered into an overseas website and stored outside Australia.
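The paragraphs above describe an architecture rather than an implementation, but a hedged sketch can show the general shape: the onshore service verifies identity and hands back only an opaque token, and the content provider checks that token without ever handling identity data. Everything here (the class name, the token scheme, the in-memory token store) is an assumption for illustration, not a description of any proposed system.

```python
import secrets

# Hypothetical illustration only: an onshore verification service issues an
# opaque token confirming "this user is an adult", and the content provider
# checks the token without ever seeing identity documents. A real scheme would
# use signed tokens and secure channels; this in-memory version is a sketch.

class VerificationService:
    """Stand-in for an Australian-based, government-managed verification service."""

    def __init__(self) -> None:
        self._confirmed_tokens: set[str] = set()

    def verify_adult(self, identity_checks_passed: bool) -> str | None:
        """Verify identity onshore; return an opaque confirmation token if the user is an adult."""
        if not identity_checks_passed:
            return None
        token = secrets.token_hex(16)
        self._confirmed_tokens.add(token)
        return token

    def is_confirmed_adult(self, token: str) -> bool:
        """Let a content provider check a token without learning who the user is."""
        return token in self._confirmed_tokens


service = VerificationService()

# The user proves their age to the onshore service; identity data never leaves it.
token = service.verify_adult(identity_checks_passed=True)

# The (possibly overseas) content provider only asks: "is this token valid?"
print(service.is_confirmed_adult(token))     # True: access can be granted
print(service.is_confirmed_adult("forged"))  # False: no confirmation, no access
```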

But with so much of the adult content itself hosted overseas, it will require a coordinated effort to enforce and to ensure that providers have the ability to connect to age verification systems in Australia.

Won’t kids just bypass it anyway?

The reality is, no system is perfect. With age assurance, children can enter fake details – or genuine information from another person – to claim they are older. Even the use of biometrics can potentially be bypassed with the cooperation of an older relative, photo filters or future AI applications.

Age verification offers more potential. But for it to work, the verification process must confirm not just the age of the claimed identity, but that the person attempting to verify their age really is who they claim to be. For example, a child could use stolen identity documents to present a legitimate driver’s licence or passport number as their own.

Finally, these checks are likely to be specific to Australia, with service providers implementing a solution only for connections originating within the country. With easy access to virtual private networks (VPNs) and anonymising browsers such as Tor, there are many ways to potentially evade these controls.

While we may not have a simple solution, imposing controls that block the majority of under-age access attempts is still worthwhile.

Some children will always seek to access illicit materials. Those determined enough will always find a way, just as plenty of children still find a way to smoke and drink.

But doing nothing is not an option – and this may well protect at least some impressionable minds.

Paul Haskell-Dowland, Professor of Cyber Security Practice, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo credit: charlesdeluvio/Unsplash