Governments across the world are increasingly moving to restrict children’s access to social media, signalling a global shift in how digital platforms are regulated to protect youth well-being. This movement is a response to mounting concerns over the negative effects of social media, including impacts on mental health, exposure to harmful content, and the manipulative nature of commercial interests and algorithms. Denmark’s recent proposal is one of the most sweeping measures in Europe, following established legislation in Australia and the UK.
Denmark’s Proposed Social Media Ban: A New European Standard
The new plan, announced today by the Ministry of Digitalisation, would prohibit access to “certain” social media platforms for anyone under 15 years old.
- Minimum Age: A new legal age limit of 15 for social media access
- Parental Consent Exception: The proposal includes a provision that would allow parents to give consent for their children to access social media from the age of 13, subject to a “specific assessment”
- Motivation: The government coalition cites disruption to children’s sleep, concentration, and peace of mind, as well as the pressure of digital relationships and the influence of commercial interests and harmful content. It argues that society must step in where parents, teachers, and educators cannot halt this development alone.
The push for online safety extends beyond national laws. The European Commission (the EU’s executive body) has already issued guidelines to strengthen protections for minors and has even tested a prototype age-verification app. Against this backdrop, Danish lawmaker Rasmus Lund-Nielsen of the Moderates party summed up the current situation by saying social media has become “the Wild West.”
Lund-Nielsen noted how widespread access is: “Every other 10-year-old is on TikTok, but now we are setting a limit.” He dismissed the idea that the danger is only a parental concern, stating that “it is not just a parental responsibility to protect children from seeing Charlie Kirk being shot in the throat on social media.” Citing youth health statistics, he concluded that “society must step in and take responsibility,” and summed up the ultimate goal: “Now we are giving children their childhood back.”
Australia’s Landmark Ban: Setting the Precedent
In November 2024, Australia enacted a landmark law setting the minimum age for creating or maintaining a social media account at 16.

Image: Prime Minister Keir Starmer and Australian Prime Minister Anthony Albanese – Simon Dawson / No 10 Downing Street
The law enforces a strict minimum age of 16 across all major social media platforms, including TikTok, Facebook, Instagram, Snapchat, X, YouTube, Reddit and Kick. Platforms face fines of up to A$50 million (approximately £24.6 million) for systemic failures to prevent children under 16 from holding accounts. The law mandates that platforms verify users’ ages using age-assurance technologies such as facial analysis or ID checks, rather than relying solely on self-declaration, and that they deactivate existing underage accounts.
The Australian government has made exemptions for essential services such as educational and health apps (e.g. Google Classroom and Microsoft Teams), as well as messaging apps like WhatsApp.
The UK’s Online Safety Act: Focusing on Duty of Care and Banning Harmful Content
The UK’s Online Safety Act 2023 takes a different approach, imposing a duty of care on platforms to protect children rather than an outright age-based ban.
The Act requires platforms to implement systems and processes to protect children from illegal and “priority” harmful content.

Image: Prime Minister Keir Starmer meets staff as he visits the North Bristol Community Diagnostics Centre – Simon Dawson / No 10 Downing Street
Services likely to be accessed by children must conduct children’s risk assessments. Platforms that host “primary priority” harmful content, including pornography and content promoting self-harm, suicide, or eating disorders, must use highly effective age assurance (such as facial scans or photo ID checks) to prevent children from accessing it.
The law also requires platforms to ensure their algorithms do not promote or recommend harmful content to children, and to provide children with age-appropriate experiences and accessible reporting tools.
These three countries, Denmark, Australia, and the UK, represent a trend of governments worldwide intervening to protect children from what is seen as a digitally unregulated and potentially developmentally damaging environment.
Featured Image via Danish PM Mette Frederiksen / Party of European Socialists