Australia’s online watchdog has accused the world’s biggest social platforms of not adequately implementing the country’s ban on under-16s using their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and insufficient measures to prevent new accounts. In its first compliance report since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Uncovered in First Major Review
Australia’s eSafety Commissioner has detailed a worrying pattern of non-compliance amongst the world’s most prominent social media platforms in her first formal review since the ban took effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have collectively failed to establish appropriate safeguards to stop minors from using their services. Julie Inman Grant raised significant concerns about structural gaps in age verification systems, noting that some platforms have allowed children who originally declared themselves to be under 16 to later assert they were older, thereby undermining the law’s intent.
The findings mark a significant escalation in the regulatory response, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has made clear that simply showing some children still maintain accounts is insufficient; platforms must instead furnish substantive proof that they have established robust systems and processes designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. Among the deficiencies identified:
- Allowing previously banned users to re-verify their age and restore account access
- Enabling multiple attempts at the same age assurance method without consequence
- Inadequate mechanisms to prevent under-16s from establishing new accounts
- Insufficient complaint mechanisms for families and the wider community
- Lack of transparent data about regulatory measures and account deletions
The Scope of the Challenge
The substantial scale of social media usage amongst young Australians highlights the compliance challenge facing both the government and the platforms themselves. With millions of accounts already restricted or removed since the implementation of the ban, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings indicate that the technical and procedural obstacles to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This intricacy has left enforcement authorities wrestling with the core issue of whether existing age verification systems are adequate to the task.
Beyond the operational challenges lies a broader concern about the willingness of platforms to place compliance ahead of user growth. Social media companies have long resisted stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms might not be demonstrating adequate commitment to deploy the infrastructure required by law. The move to active enforcement represents a pivotal moment: either platforms will significantly enhance their compliance infrastructure, or they risk facing substantial fines that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Numbers Reveal
In the first month following the ban’s introduction, Australian officials reported that 4.7 million accounts had been restricted or removed. Whilst this number initially appeared to show compliance success, further investigation reveals a more layered picture. The substantial number of account removals indicates that many under-16s had successfully created accounts in the first place, revealing that preventive controls were insufficient. Additionally, the data raises questions about whether removed accounts reflect genuine enforcement or merely voluntary deletions by users responding to the new rules.
The restricted transparency regarding these figures has disappointed independent observers seeking to assess the ban’s actual effectiveness. Platforms have provided scant details about their compliance procedures, effectiveness metrics, or the characteristics of deleted profiles. This opacity makes it difficult for regulators and the public to determine whether the ban is working as intended or whether younger users are simply finding other methods to use social media. The Commissioner’s push for comprehensive proof of consistent enforcement practices reflects growing frustration with platforms’ reluctance to provide complete details.
Industry Response and Pushback
The social media giants have responded to the regulator’s enforcement action with a mixture of compliance assurances and scepticism about the ban’s practicality. Meta, which operates Facebook and Instagram, emphasised its commitment to complying with Australian law whilst at the same time contending that accurate age determination continues to be a significant industry-wide challenge. The company has advocated for a different approach, suggesting that robust age verification and parental approval mechanisms put in place at the application store level would be more efficient than enforcement at the platform level. This position reflects broader industry concerns that the existing regulatory system places an impractical burden on separate platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, stating that it had locked 450,000 accounts since the ban took effect and asserting it continues to suspend additional accounts each day. However, sector analysts question whether such figures demonstrate genuine compliance or simply represent reactive account management. The core conflict between platforms’ business models—which traditionally depended on maximising user engagement and expansion—and the statutory obligation to actively exclude an entire age demographic persists unaddressed. Companies have consistently opposed stringent age verification, pointing to privacy issues and technical constraints, creating a standoff between authorities and platforms over who carries responsibility for implementation.
- Meta maintains age verification ought to take place at app store level rather than on individual platforms
- Snap claims to have locked 450,000 user accounts following the ban’s implementation in December
- Industry groups point to privacy concerns and technical challenges as impediments to effective age verification
- Platforms contend they are doing their best whilst challenging the ban’s general effectiveness
More Extensive Inquiries Regarding the Ban’s Impact
As Australia’s under-16 social media ban moves into its enforcement phase, key concerns persist about whether the law will achieve its intended goals or merely push young users towards less regulated platforms. The regulator’s first compliance report reveals that following implementation, significant loopholes remain—children keep discovering ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will truly leave major social networks or simply migrate to alternative services, encrypted messaging applications, or virtual private networks designed to conceal their age and location.
The ban’s international ramifications add another layer of complexity to assessments of its impact. The United Kingdom, Canada, and multiple European countries are watching Australia’s experiment closely as they consider similar legislation for their own populations. If the ban does not successfully reduce children’s online activity or fails to protect them from harmful material, it could weaken the case for equivalent legislation elsewhere. Conversely, if enforcement becomes sufficiently rigorous to effectively limit underage usage, it may encourage other governments to adopt similar strategies. The outcome will likely influence global regulatory trends for the foreseeable future, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Those Who Profit and Those Who Suffer
Mental health campaigners and organisations focused on child safety have backed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators contend that removing young Australians from platforms built to maximise engagement could reduce anxiety, improve sleep patterns, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—keeping friendships alive, obtaining educational material, and engaging with online communities around common interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families challenge.
The ban’s practical impact extends beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing can no longer reach younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unexpectedly favours large technology companies with the resources to develop age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Happens Next for Enforcement
Australia’s eSafety Commissioner has signalled a notable transition from passive oversight to proactive enforcement, marking a key milestone in the rollout of the under-16 ban. The regulator will now collect evidence to ascertain whether companies have failed to take “reasonable steps” to block minors from using their services, a statutory benchmark that goes beyond simply recording that children remain on these platforms. This approach requires tangible verification that companies have implemented appropriate systems and processes designed to keep minors off their services. The regulator has stated it will conduct enquiries methodically, building cases that could lead to considerable sanctions for non-compliance. This move from observation to action reveals growing frustration with the companies’ current approach and signals that voluntary participation by itself is insufficient.
The enforcement phase raises critical questions about the adequacy of fines and the practical mechanisms for ensuring platform accountability. Australia’s legislation provides regulatory tools, but their success depends on the eSafety Commissioner’s willingness to pursue enforcement and the platforms’ capacity to adjust meaningfully. Overseas authorities, especially regulators in Britain and Europe, will closely monitor Australia’s implementation tactics and their consequences. A robust enforcement effort could establish a blueprint for other nations evaluating equivalent prohibitions, whilst shortcomings might undermine the overall legislative framework. The forthcoming period will determine whether Australia’s innovative statutory framework delivers real safeguards for young people or becomes largely performative in its effect.
