

In a major move, Australia has banned the use of social media platforms including TikTok, X, Facebook, Instagram, YouTube, Snapchat, and Threads by anyone under the age of 16.
According to the directive, children under 16 will be unable to create new accounts, while existing profiles will be deactivated. The decision has sparked global debate and drawn criticism from social media companies, while other countries are closely monitoring the rollout before deciding whether to adopt similar models. According to the announcement, 10 major platforms are currently covered by the ban: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and the streaming platforms Kick and Twitch.
Why the ban?
According to the Australian government, the ban is meant to address concerns over the amount of time young people spend on social media, which is believed to be harming their health and wellbeing. The government evaluates platforms based on whether their main or significant purpose is to enable online social interaction between users, whether users can interact with some or all other users, and whether they allow users to post content.
Services such as YouTube Kids, Google Classroom, and WhatsApp are not included, as they are not considered to meet these criteria.
Social media companies have criticised the move, saying the ban will be difficult to enforce, easy to bypass, and time-consuming for users, and that it could raise privacy concerns. They have also warned that it may push children toward unsafe areas of the internet and reduce opportunities for social interaction. Meta, which owns Facebook, Instagram, and Threads, began closing teen accounts on 4 December. The company said users who are removed by mistake can verify their age using government-issued identification or a video selfie. Snapchat has said users can verify their age using bank accounts, photo ID, or selfies.
Onus on companies to ensure compliance
Under the new rules, children and parents will not be penalised for breaking the ban. Instead, social media companies face fines of up to A$49.5 million (US$32 million) for serious or repeated violations. The government has said platforms must take “reasonable steps” to keep children off their services, using multiple age-verification methods. These could include government ID checks, facial or voice recognition, or “age inference,” which estimates a user’s age based on online behaviour. Platforms cannot rely only on self-declared ages or parental consent.