France’s lower house of parliament has adopted a draft law banning access to social media platforms for children and teenagers under the age of 15, citing the need to protect young people’s mental and physical health.
The bill, backed by the government of President Emmanuel Macron, was approved late on Monday night with 130 votes in favour and 21 against, and will now be examined by the Senate. If adopted in its final form, the law would make France the first European country to introduce a legally binding age threshold for access to social networking sites.
Following the vote, President Macron welcomed the outcome in a post on X, describing it as a “major step” in protecting young people online.
Banning social media for under-15s: that is what scientists recommend, and it is what the French people are overwhelmingly calling for.
After fruitful work with the Government, the National Assembly has just said yes.
This is a major step.…
— Emmanuel Macron (@EmmanuelMacron) January 26, 2026
The proposed legislation targets popular social media platforms and forms part of a broader European and international debate on the impact of excessive screen time, online addiction and exposure to harmful content on minors.
Australia’s tougher stance on social media and minors
France’s move follows a more radical decision taken by Australia, which in November 2024 adopted legislation banning access to social media platforms for children under 16.
The Australian government said the decision was driven by growing evidence that social media platforms are harming children’s health and wellbeing. In particular, it cited design features that encourage prolonged screen time and promote content linked to anxiety, eating disorders, self-harm and violence.
A government-commissioned study in 2025 found that 96% of children aged 10 to 15 used social media and that seven in ten had been exposed to harmful content, including misogynistic and violent material, as well as posts promoting suicide and disordered eating.
The same research showed that one in seven children had experienced grooming-type behaviour from adults or older users, while more than half reported being victims of cyberbullying.
Platforms covered and exclusions
Ten platforms are currently included in the ban: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit and the streaming platforms Kick and Twitch.
Authorities assess whether a service falls under the ban based on three criteria:
- whether its main or significant purpose is online social interaction,
- whether users can interact with others on the platform, and
- whether users are able to post content.
Services such as YouTube Kids, Google Classroom and WhatsApp were excluded, as they were not deemed to meet these criteria. Children can also continue to view content on platforms that do not require an account.
Critics have argued that the scope should be wider, pointing to online gaming and social platforms such as Roblox and Discord. In November 2025, Roblox announced it would introduce age checks for some features, though it remains outside the current ban.
Enforcement and penalties
Children and parents face no penalties for breaching the ban. Instead, enforcement targets social media companies, which can be fined up to A$49.5 million for serious or repeated failures to comply.
Platforms are required to take “reasonable steps” to prevent underage access and must deploy multiple age-assurance technologies. These may include government-issued identification, facial or voice recognition, or behavioural “age inference” systems. Self-declaration by users or parental consent alone is not considered sufficient.
Meta, which owns Facebook, Instagram and Threads, began closing teenage accounts from early December, allowing users who were removed in error to appeal using government ID or video verification. The company said it blocked around 550,000 accounts in the initial phase. Snapchat introduced verification options using bank details, photo ID or selfies.
Criticism and privacy concerns
The ban has faced sustained criticism, including concerns that age-verification tools may wrongly block adults while failing to identify underage users. The government’s own analysis suggested facial recognition is least reliable when assessing teenagers.
Others questioned whether the fines would act as a real deterrent, noting that large platforms generate equivalent sums in revenue within hours. There have also been warnings that excluding dating sites, gaming platforms and AI chatbots undermines the policy’s effectiveness, particularly as some AI tools have been accused of encouraging self-harm or engaging in inappropriate conversations with minors.
Privacy campaigners raised alarms over the scale of personal data collection required to verify ages, especially in a country that has experienced several major data breaches. The government insists that the law includes strict safeguards, limiting data use to age verification only and requiring its destruction afterwards, with severe penalties for misuse.
How companies and users reacted
Social media companies strongly opposed the ban when it was announced in late 2024, arguing it would be difficult to enforce, easy to circumvent and risky for user privacy. Some warned it could push children towards less regulated corners of the internet.
Despite these objections, platforms moved to comply. TikTok and Snap said they disagreed with the policy but would follow the law. Reddit expressed “deep concerns” about free expression and privacy, while YouTube argued that the new rules could reduce safety by allowing children to access content without accounts and therefore without parental controls.
What changed after the ban took effect
In the days leading up to the ban on 10 December, thousands of Australians searched for alternative apps, with lesser-known platforms such as Lemon8, Yope and Coverstar briefly surging in downloads. Interest later declined, while the use of VPNs rose temporarily before returning to normal levels.
A month into the ban, teenagers reported mixed experiences. Some said they felt relieved and less pressured, while others said it had made little difference to their habits, admitting they continued to access platforms using accounts with false birthdates or shared profiles with parents.
As Australia tests the long-term impact of its decision, policymakers elsewhere are watching closely to see whether the ban meaningfully improves children’s online safety or simply reshapes how young people navigate the digital world.
Sources: CNA, BBC