Meta has announced a definitive date for a significant policy change in Australia. From August 31, 2024, the social media giant will begin actively identifying and removing Australian users under the age of 16 from its flagship platforms, Instagram and Facebook.
This move marks a major shift in how the company manages underage users and is a direct response to strengthening government regulations, setting a potential precedent for other countries around the world.
Why Is Meta Removing Underage Users in Australia?
This proactive measure is a direct response to the Australian government’s upcoming Online Privacy Code. The new legislation places a high level of responsibility on tech companies to protect the data of children.
Under the code, if a platform cannot verify with a high degree of certainty that a user is over 16, it must treat them as a child and provide the highest level of privacy protection. Faced with the logistical challenge of managing this on a massive scale and the risk of significant fines for non-compliance, Meta has opted for a clearer boundary: removing users it identifies as being under 16.
A Meta spokesperson stated, “We are committed to complying with our legal obligations and creating safe online experiences. This proactive step in Australia reflects our dedication to protecting young people on our platforms.”
How Meta's New Age Verification Will Work
The days of simply ticking an “I am over 13” box are numbered. Meta plans to deploy a sophisticated, multi-pronged age-verification system to enforce its new policy.
This system will include:
* AI-Powered Analysis: Technology will analyse signals like posts, connections (e.g., the age of a user’s friends), and other account activities to flag users who are likely underage.
* Video-Selfie Verification: In partnership with third-party companies like Yoti, users flagged for an age check may be asked to take a short video selfie, which AI then analyses to estimate their age.
* Social Vouching: A system where a user can ask adult mutual friends to vouch that they meet the age requirement.
While no system is perfect and tech-savvy teens may seek workarounds, Meta's investment in these tools signals a clear shift towards more robust enforcement.
A Global Precedent? What This Means for Other Countries
While this policy is currently specific to Australia, it is being watched closely by governments and regulators worldwide. Many jurisdictions have their own stringent data privacy laws regarding minors, such as the United States' COPPA requirement for "verifiable parental consent" and the EU GDPR's rule that children under 16 need parental consent for data processing.
If Meta’s removal strategy in Australia proves effective at satisfying regulators, it could become a blueprint for compliance in other markets with similar privacy laws. For parents, this may be welcome news amid growing concerns over youth mental health and online safety. For millions of young users globally, it could signal the end of an era.
The message is clear: the unregulated early days of social media are over. As governments increase their oversight, tech companies are being forced to build the fences they were long criticised for neglecting. Australia is the first major testing ground for this new, stricter approach.
