Australia’s online watchdog has accused the world’s biggest social platforms of failing to adequately implement the country’s ban on under-16s using their services, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting shortcomings including allowing banned users to repeatedly attempt age verification and inadequate safeguards against the creation of new accounts. In its first compliance report since the prohibition came into force, the regulator identified multiple shortcomings and has now shifted from observation to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Significant Non-compliance Issues Exposed in Initial Review
Australia’s eSafety Commissioner has documented a worrying pattern of non-compliance amongst the world’s biggest social media platforms in her first formal review since the ban took effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have collectively failed to establish sufficient safeguards to stop minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, highlighting that some platforms have permitted children who initially declared themselves to be under 16 to later assert they were older, thereby undermining the law’s intent.
The findings mark a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has emphasised that simply showing some children still hold accounts is insufficient; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet the legal requirements.
- Allowing previously banned users to re-attempt age verification and regain account access
- Permitting repeated attempts at the same verification process without consequence
- Insufficient safeguards to prevent under-16s from opening new accounts
- Inadequate reporting tools for families and the wider community
- Lack of publicly available information about enforcement efforts and account deletions
The Magnitude of the Problem
The sheer scale of social media usage amongst young Australians underscores the compliance challenge confronting both the government and the platforms in question. With millions of accounts already removed or restricted since the ban came into effect, the figures paint a picture of widespread initial non-compliance. The eSafety Commissioner’s findings indicate that the operational and technical barriers to enforcing age restrictions have proven far more complex than anticipated, with platforms struggling to differentiate authentic age confirmations from false claims. This complexity has left enforcement authorities wrestling with the fundamental question of whether current age verification technologies are adequate to the task.
Beyond the technical obstacles lies a wider question about the willingness of companies to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing data protection concerns and the genuine difficulty of verifying age digitally. However, the Commissioner’s report suggests that some platforms may not be investing adequately in the infrastructure required by law. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance systems, or they stand to incur substantial fines that could transform their operations in Australia and possibly reshape compliance frameworks internationally.
What the Figures Indicate
In the first month after the ban’s launch, Australian regulators stated that 4.7 million accounts had been restricted or deleted. Whilst this statistic initially appeared to demonstrate successful compliance, further investigation reveals a more nuanced picture. The sheer volume of account deletions implies that many under-16s had managed to establish accounts before the ban, revealing that preventative measures were inadequate. Additionally, the data raises questions about whether removed accounts constitute genuine enforcement or merely users voluntarily deleting their own accounts in response to the new rules.
The restricted transparency surrounding these figures has frustrated independent observers trying to determine the ban’s genuine effectiveness. Platforms have provided scant details about their enforcement methodologies, effectiveness metrics, or the profile of suspended accounts. This opacity makes it hard for regulators and the wider public to assess whether the ban is operating as planned or whether younger users are merely discovering different means to reach social media. The Commissioner’s insistence on thorough documentation of consistent enforcement practices reflects growing frustration with platforms’ reluctance to provide comprehensive data.
Sector Reaction and Opposition
The major tech platforms have responded to the regulator’s enforcement action with a mixture of compliance assurances and doubts about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, emphasised its commitment to adhering to Australian law whilst at the same time contending that precise age verification remains a major challenge across the industry. The company has called for an alternative strategy, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects broader industry concerns that the existing regulatory framework places an impractical burden on individual platforms.
Snap, the creator of Snapchat, has taken a more proactive public stance, announcing that it had locked 450,000 accounts since the ban took effect and claiming to continue locking more daily. However, sector analysts question whether such figures reflect genuine compliance or merely reactive account management. The fundamental tension between platforms’ commercial models, which historically relied on maximising user engagement and growth, and the statutory obligation to actively exclude an entire age demographic remains unresolved. Companies have consistently opposed stringent age verification, citing privacy issues and technical constraints, creating an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta argues age verification should occur at app store level instead of on individual platforms
- Snap claims to have locked 450,000 accounts following the ban’s implementation in December
- Industry groups point to privacy issues and technical obstacles as impediments to effective age verification
- Platforms maintain they are doing their best whilst questioning the ban’s general effectiveness
Wider Considerations Concerning the Prohibition’s Efficacy
As Australia’s under-16 online platform ban enters its enforcement phase, key questions remain about whether the legislation will accomplish its intended goals or merely drive young users towards less regulated platforms. The regulator’s first compliance report reveals that despite months of implementation, substantial gaps exist: children keep discovering ways to circumvent age verification mechanisms, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will truly leave major social networks or simply shift towards alternative services, secure messaging apps, or VPNs used to conceal their age and location.
The ban’s international ramifications add further complexity to assessments of its success. Countries including the United Kingdom, Canada, and several European nations are watching Australia’s initiative closely, evaluating similar regulatory measures for their own populations. If the ban fails to reduce children’s online activity or to protect them from damaging material, it could undermine the case for similar measures elsewhere. Conversely, if enforcement becomes sufficiently robust to genuinely restrict underage usage, it may encourage other governments to pursue similar approaches. The outcome could shape worldwide regulatory patterns for years to come, ensuring that Australia’s enforcement efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses Out
Mental health campaigners and organisations focused on child safety have backed the ban as a necessary intervention to counter algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and limit exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people: keeping friendships alive, accessing educational material, and engaging with online communities around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical impact extends beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban inadvertently advantages large technology companies with the resources to develop age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects reach far beyond the straightforward goal of child protection.
What Follows for Regulatory Action
Australia’s eSafety Commissioner has signalled a significant shift from passive oversight to direct intervention, marking a critical turning point in the implementation of the under-16 ban. The authority will now gather evidence to determine whether platforms have failed to take “reasonable steps” to block minors from using their services, a statutory benchmark that goes beyond simply noting that children remain on these platforms. This approach requires tangible proof that companies have implemented suitable systems and processes designed to prevent underage access. The Commissioner’s office has signalled it will launch investigations methodically, building evidence that could result in substantial penalties for non-compliance. This transition from observation to intervention reflects growing frustration with the companies’ current approach and suggests that voluntary cooperation alone will not be enough.
The enforcement phase raises critical questions about the adequacy of fines and the operational mechanisms for ensuring platform accountability. Australia’s legislation provides enforcement mechanisms, but their effectiveness depends on the eSafety Commissioner’s willingness to pursue regulatory action and the platforms’ capacity to adapt meaningfully. Global regulators, notably those in the UK and EU, will closely track Australia’s enforcement strategy and outcomes. An effective regulatory push could create a template for other countries contemplating similar bans, whilst inadequate results might undermine the broader case for regulation. The coming months will determine whether Australia’s pioneering regulatory approach produces genuine protection for young people or remains largely symbolic in its impact.
