Social media age ban only the first step toward safer online spaces


Andrew Black
Contributor

Unless you work in public policy, you may not be aware of the many discussions happening about protecting Australian kids on the internet beyond just the upcoming social media ban for under-16s, set to be enforced by the end of this year.

Although important, the legislation to stop children from opening accounts on major platforms like TikTok and Instagram is far from the only reform on the table.

Right now, a wave of regulatory change is poised to reshape the digital environment for young people in the country, from gaming to educational apps. Alongside the age ban, industry codes designed to restrict children’s access to online pornography and other harmful content are under review and expected to be finalised by the eSafety Commissioner in late 2025.

Meanwhile, the government is planning to legislate a Digital Duty of Care. This framework will legally oblige digital platforms to take reasonable steps to prevent harm to users, particularly children, shifting the focus from reactive to proactive, systemic safety.

ConnectID managing director Andrew Black

The Children’s Online Privacy Code, currently under development by the Office of the Australian Information Commissioner (OAIC), will introduce strict requirements in 2026 for how platforms collect, use, and store children’s data.

Together, these reforms reflect a growing understanding that online safety can’t begin and end with a social media ban. It must follow children across every digital space they use, from games and learning platforms to streaming services and everyday apps. And it must be consistent, enforceable, and designed around how kids engage with technology today.

Fragmented use, inconsistent protection

Despite screen time guidelines from the Australian Department of Health, which recommend no more than two hours a day for most children, research shows that only 15 per cent of children actually meet them. Globally, most children spend up to four hours online outside school each day. The reality is that kids now spend so much time on screens that safety tools and guidelines must reflect it.

Most young people navigate online spaces with ease. One moment they are chatting in a game, the next they are watching videos or working through a school app.

This kind of digital fluency is the new norm, but platform safeguards have not evolved at the same speed.

Inappropriate content, unwanted contact and poor data protections remain common on platforms outside the social media spotlight.

Parents around the world are doing their best to protect children from harm: around 80 per cent regularly discuss online risks with their kids, and 76 per cent set limits on device use. But they often lack consistent, supportive tools across the platforms their children use.

Age assurance as a safer default

One of the clearest ways to support safer online experiences is through smarter age assurance, an umbrella term covering technologies used to determine a user’s age.

Whether the goal is to limit access for young users or to keep adults out of children’s spaces, age assurance allows platforms to design more mindful experiences. This can include limiting features, adjusting content exposure, or enabling parental controls, all without relying on intrusive checks or collecting unnecessary data.

What is needed is an approach that is simple, privacy-first, and based on consent. Age verification should not require users to upload documents or hand over unnecessary sensitive information, especially when it comes to young people. It should enable users, or parents where appropriate, to verify only their age for a specific digital purpose.

The technology already exists

This is not a hypothetical solution. Some sectors are already putting it into practice using digital identity solutions like ConnectID, which allows platforms to verify an individual’s age by connecting to trusted sources such as their bank.

Personal details are never stored or seen by anyone involved. After the user gives their consent for the verification, platforms receive only the information they need, like confirmation that someone is over or under a certain age threshold.

In the gaming space, some developers are applying this model to adjust features or activate parental controls in real time. In some cases, for example, age determines whether children can access chat features or see certain content, as the sketch below illustrates.
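For a platform consuming this kind of check, the pattern can be sketched roughly as follows. The interface and field names here are illustrative assumptions, not ConnectID’s actual API; the point is that the platform only ever handles an over/under answer for a given threshold, never a birth date or identity document.

```typescript
// Hypothetical sketch of a privacy-preserving age assurance check.
// The shape of the assertion is an assumption for illustration only.

// The platform receives an assertion from a trusted source (e.g. a bank),
// not the user's personal details.
interface AgeAssertion {
  overThreshold: boolean; // e.g. "is this user 16 or older?"
  threshold: number;      // the age the platform asked about
  issuedAt: string;       // when the trusted source made the assertion
}

// Platform-side feature gating driven by the assertion alone.
function applySafetyDefaults(assertion: AgeAssertion) {
  if (!assertion.overThreshold) {
    // Younger users: restrict open chat, enable parental controls,
    // and filter age-restricted content.
    return { openChat: false, parentalControls: true, restrictedContent: false };
  }
  // Users over the threshold get the standard experience.
  return { openChat: true, parentalControls: false, restrictedContent: true };
}

// Example: a game server asking whether a player clears the 16+ threshold.
const assertion: AgeAssertion = {
  overThreshold: false,
  threshold: 16,
  issuedAt: new Date().toISOString(),
};

console.log(applySafetyDefaults(assertion));
// -> { openChat: false, parentalControls: true, restrictedContent: false }
```

In practice an assertion like this would typically be signed by the trusted source and verified before the platform acts on it, so the over/under answer cannot be forged.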

While not yet widespread, these examples show it is possible to create safer, more tailored digital environments without sacrificing privacy or ease of use.

What happens next matters most

The social media ban is a major step, but it cannot be where the conversation ends. Children spend time across a wide variety of digital platforms, and safety measures must follow them wherever they go.

Australia has both the tools and the momentum to lead in this space. What we need now is a clear framework for online protections, backed by consistent standards and guidance to help platforms apply age assurance in a way that reflects how children actually use the internet.

Most importantly, this approach should ensure inclusion and choice, so every child and family can access and benefit from safe online experiences. It should also respect the choices parents and guardians make for their children’s digital lives. At the same time, platforms need clear and consistent guidelines that provide a reliable path forward for safety, privacy, and trust.

Young people will continue growing up online; the real question is whether the internet will evolve with them.

Andrew Black is the managing director of ConnectID at Australian Payment Plus
