Is banning social media for children under 16 in Australia practically possible?

Australia has approved a social media ban for children under 16 after an emotive debate that gripped the nation, setting a benchmark for jurisdictions worldwide with one of the toughest regulations targeting Big Tech.

According to the new law, tech giants like Instagram and Facebook owner Meta Platforms, as well as TikTok, must prevent minors in Australia from logging into social media sites and apps, or face fines of up to A$49.5 million ($32 million). A trial of enforcement methods will begin in January, with the ban set to take effect in a year.

‘I will still use it, secretly’

Citizens have reacted to the development in various ways, with a mix of anger and relief. Some called the world-leading ban a “great idea,” while others considered it an imposition. An 11-year-old, Emma Wakefield, quoted by Reuters, said, “I feel like I still will use it, just secretly get in.”

On the other hand, tech giants argue that the ban could push young people to “darker corners of the internet.” A spokesperson for TikTok, which is hugely popular with teens, told Reuters that the process had been rushed and could potentially put children at greater risk.

Can the ban be practically enforced?

As the 11-year-old indicated, users could always lie about their age, log in via older people’s devices, or find other ways around the restrictions. This raises the question of whether the ban is practically enforceable. CNBC-TV18 reached out to global experts to explore this concern.

‘Challenges to the ban exist, but the government passed the law anyway’

Before experts weighed in, Australian PM Anthony Albanese addressed parents, acknowledging the challenges in implementation: “We don’t argue that its implementation will be perfect, just like the alcohol ban for under-18s doesn’t mean that someone under 18 never has access, but we know that it’s the right thing to do.”

Lizzie O’Shea, co-founder and chair of the Australian digital rights advocacy group Digital Rights Watch, told CNBC-TV18 that it remains unclear how effective the ban will be, as many practical implementation questions remain unanswered.

“There will be workarounds, and the type of age assurance that can be used is not mandated. It will depend on the quality of technology and the position taken by platforms, among other factors,” she said.

O’Shea, who is also a principal lawyer and human rights advocate, noted that VPNs could facilitate workarounds, a possibility the government likely anticipated but chose not to address by changing course.

‘Tech-savvy kids can bypass curbs, no system is foolproof’

Kalindhi Bhatia, Partner at Delhi-based BTG Advaya, explained that many children are incredibly tech-savvy and can bypass restrictions by using VPNs, lying about their age, or logging into the account of someone over 16.

She suggested that for the ban to be effective, social media platforms would need to implement sophisticated age verification systems. This could include requiring users to upload government-issued IDs during account creation, verifying those IDs, or using advanced AI tools to analyze user behavior and detect activity resembling that of a minor under 16.

“Even then, no system is completely foolproof. It would require a coordinated effort between platforms, parents, and policymakers for any chance of success.”
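The layered verification Bhatia describes, a self-declared age checked against an ID where available and against behavioural signals otherwise, can be sketched in a few lines. This is a purely illustrative sketch: the type names, fields, and thresholds below are hypothetical and not drawn from any real platform's system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    declared_age: int                 # age the user typed in at sign-up
    id_verified_age: Optional[int]    # age from a government ID check, if performed
    behaviour_minor_score: float      # 0.0-1.0 score from a (hypothetical) behavioural model

def may_create_account(signals: AgeSignals,
                       min_age: int = 16,
                       score_threshold: float = 0.8) -> bool:
    """Return True if the user clears the under-16 restriction."""
    # A verified government ID overrides the self-declared age entirely.
    if signals.id_verified_age is not None:
        return signals.id_verified_age >= min_age
    # Without an ID, distrust a self-declared age that conflicts with a
    # strong behavioural signal suggesting the user is a minor.
    if signals.behaviour_minor_score >= score_threshold:
        return False
    return signals.declared_age >= min_age
```

Even in this toy form, the weakness the experts point to is visible: when no ID is checked, the decision rests on a declared age and a statistical score, both of which a determined teenager can game.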

Who will take responsibility? Parents, social media companies, or the government?

Globally, there is a growing trend to hold major social media companies accountable for regulating access to their platforms. Aasish Somasi, Associate Partner at SNG & Partners, noted that in Australia’s case, the new legislation will require platforms like TikTok, Snapchat, Facebook, and YouTube to take “reasonable steps” to prevent underage users from creating accounts. This shifts the responsibility to these platforms, compelling them to develop and implement effective age verification technologies.

Historically, these platforms have enjoyed immunity regarding user-generated content, as long as they comply with takedown requests. However, the new law threatens this immunity if they fail to enforce age restrictions. “This legal pressure is intended to motivate companies to invest their substantial resources into creating viable solutions for age verification and compliance,” Somasi said.

In practice, while the ban presents enforcement challenges—particularly due to the tech-savvy nature of today’s youth—leveraging the capabilities and resources of large social media platforms could enhance its effectiveness. The success of the initiative will largely depend on how companies implement and enforce age verification measures.

Is there a privacy concern?

When asked whether government-issued IDs for age verification would violate privacy, O’Shea from Digital Rights Watch explained that while government IDs could be used, they should not be the only method.

“This raises privacy concerns because it would involve social media platforms collecting government ID data. This could lead to the use of more privacy-invasive tools, like biometrics.”

Bhatia also noted that while age verification might involve biometrics or government-issued IDs, any such system must comply with strict privacy and data protection laws. Another approach could involve setting daily usage limits for minors to reduce screen time, or requiring platforms to implement parental consent mechanisms, where adults actively approve and monitor their child’s account activity.

What measures need to be taken to impose the social media ban?

Somasi listed five measures that could aid in the practical enforcement of the ban:

Advanced Age Verification: Platforms may require ID verification, biometric authentication, or integration with national databases to confirm users’ ages.

Device-Level Controls: Manufacturers could be encouraged to pre-install age-restricted access controls to block minors from accessing banned platforms.

Monitoring Algorithms: Social media companies could use AI to analyze user behavior and flag accounts that appear to be operated by minors, triggering further verification checks.

Parental Responsibility: Parents could be incentivized through education campaigns and tools to prevent their devices from being used to circumvent the ban.

Cross-Platform Collaboration: Platforms could work together to share flagged accounts and enforce age restrictions across apps.
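The "Monitoring Algorithms" step above, flagging accounts whose behaviour looks like a minor's and queuing them for further verification, might look like the following minimal sketch. All field names and the threshold are hypothetical assumptions, not any platform's actual pipeline.

```python
def flag_for_verification(accounts, threshold=0.75):
    """Return the IDs of accounts whose minor-likelihood score
    exceeds the threshold, to be routed to a manual or ID-based
    verification check."""
    return [a["id"] for a in accounts if a["minor_score"] > threshold]

# Hypothetical scores produced upstream by a behavioural model.
accounts = [
    {"id": "u1", "minor_score": 0.92},  # flagged for verification
    {"id": "u2", "minor_score": 0.30},  # passes
    {"id": "u3", "minor_score": 0.81},  # flagged for verification
]
flagged = flag_for_verification(accounts)
```

The cross-platform collaboration measure would then amount to sharing such flagged lists between services, which is precisely where the privacy concerns raised earlier resurface.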

