A number of countries around the world, including the United Kingdom, are currently either implementing – or contemplating – social media bans for children. What might the scope of a social media ban be? Why is this a legislative priority? What obligations do businesses have under existing data protection and online safety laws, and what must they do to comply? And how might the law evolve?
Background
Australia got there first, implementing a ban for under-16s on 10 December 2025, but others are not far behind. In January 2026, the National Assembly in France voted overwhelmingly in favour of a similar ban. In the days before the vote, President Macron made the case for the policy on French TV:
“Le cerveau de nos enfants et de nos adolescents n’est pas à vendre. Les émotions de nos enfants et de nos adolescents ne sont pas à vendre ou à manipuler, ni par les plateformes américaines, ni par les algorithmes chinois.”
("The brains of our children and our teenagers are not for sale. The emotions of our children and our teenagers are not for sale or to be manipulated, neither by American platforms nor by Chinese algorithms").
The UK hasn't yet decided whether to implement a social media ban, but is consulting on the idea. The consultation, launched on 2 March 2026, addresses a broader range of topics related to ‘Growing Up in an Online World’, and poses three key questions relating to a potential children’s social media ban:
- Would you support a legal requirement for social media services to have a minimum age of access?
- To what extent do you agree with the statement that “social media services should have a minimum age of access of at least 16 and should not be accessible to any children under that age?”
- Would you support a legal requirement for social media services to have a minimum age of access lower than 16? If so, what age would you set? (13, 14, 15, 16, other).
Policy considerations driving proposals for a ban
Some of the main reasons we have seen put forward:
- Social media platforms are addictive by design. As a result, children spend many hours using them each day. There is some evidence that heavy social media use is correlated with worse mental health.
- Platforms host content that is inappropriate for children (eg content of a sexual, violent or politically extreme nature).
- The presence of children on these platforms raises the risk of grooming and harassment from predatory adults.
Some potential problems with the policy
Some of the more common arguments include:
- A ban may not be practically enforceable, as age assurance mechanisms can be circumvented with VPNs and fake IDs.
- Concern about children’s social media use is overblown and the positives of social media access outweigh the negatives.
- If mainstream sites are banned, children will migrate to unmoderated, anonymous, lower-profile sites where they will be at greater risk.
- A ban for children misses the point, as addictive design features are also harmful to adults.
Has the law not considered these problems until now?
It has, but perhaps not in a sufficiently holistic way. Many countries (including the UK) already have data protection laws that effectively require parents to give their consent for their children to set up a social media account. Online safety laws are also becoming increasingly important tools, requiring platforms to do more to police the content that they host.
What objective does a children’s social media ban aim to achieve?
While the details will vary depending on the jurisdiction, many of the proposals share one core objective: to prevent children under a certain age from holding a social media account.
What age? What counts as social media? The answers to those questions will vary. As discussed above, Australia went for 16. France is going for 15. We’re not yet sure what the UK proposal will be.
Don’t most social media platforms already stop children under a certain age from having an account?
Yes. Many platforms set a minimum age of 13.
What counts as a social media account?
This is a contentious subject.
The definition of ‘age-restricted social media account’ in the relevant Australian legislation is complex. The key requirement is that the main or significant purpose of the platform is to enable ‘online social interaction’.
How do you decide what kind of platform is in scope based on that?
With difficulty. To help solve this problem, the Australian legislation gives regulators the option to specify which social media platforms they consider to be in or out of scope. At the time of writing, regulators have made the following decisions:
- In scope: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter) and YouTube.
- Out of scope: Discord, GitHub, Google Classroom, LEGO Play, Facebook Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids.
Doesn’t the definition of social media risk catching services that are considered by most to be beneficial for children?
Yes. To help solve this problem, Australia built on the generic definition of ‘age-restricted social media platform’ by creating various exemptions, including for services that have the sole or primary purpose of:
- Enabling end users to communicate by means of messaging, email, voice calling or video calling
- Enabling end users to play online games with other end users; or
- Supporting the education or health of end users
There is also an exemption for services that have a significant purpose of facilitating communication between educational institutions and students (or students’ families), or between healthcare providers and the people using their services.
What would the data protection considerations be for organisations implementing a children’s social media ban?
Setting up a social media account usually involves sharing personal data with the platform in question (eg your name and email address, perhaps some photographs). There are already legal provisions that, in theory, restrict the ability of children to set up accounts. For example, Article 8 of the EU GDPR says that, where an online service relies on a child’s consent, processing the personal data of under-16s is only lawful “if and to the extent that consent is given or authorised by the holder of parental responsibility.” The EU GDPR allowed Member States to opt for a lower age, provided it was not below 13.
The UK opted for 13 when it implemented the GDPR, a position unchanged since Brexit. This means that it is unlawful for a social media platform to allow a child under the age of 13 in the UK to have an account without making reasonable efforts to verify that consent has been given by someone with parental responsibility. This sort of legal provision is sometimes referred to as the ‘digital age of consent’. Given that many of the major platforms already have terms and conditions that set 13 as the minimum age for opening an account, a children’s social media ban would not be likely to change UK data protection law all that much.
It's worth noting that the digital age of consent applies to a broader range of ‘information society services’ besides social media. This term covers most online services that are provided for commercial purposes, including websites, apps, online gaming, search engines, and online marketplaces. These sorts of service often need to seek consent to use people’s data for activities such as profiling for advertising purposes.
Businesses that operate services of this nature can expect greater scrutiny of the measures they have in place to verify:
- The age of their users
- That parental consent has been obtained for the processing of personal data of under 13s
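By way of illustration only, the sketch below shows how checks of this kind might sit at the point of registration. The function names, structure and consent-verification step are hypothetical simplifications; they are not drawn from any particular platform’s process or from regulatory guidance.

```python
from datetime import date

UK_DIGITAL_AGE_OF_CONSENT = 13  # threshold set by the UK when implementing Article 8 GDPR


def age_in_years(date_of_birth: date, today: date | None = None) -> int:
    """Calculate a user's age in whole years from their declared date of birth."""
    today = today or date.today()
    return today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )


def can_register(date_of_birth: date, parental_consent_verified: bool) -> bool:
    """Hypothetical registration gate.

    Users at or above the digital age of consent may register on the basis of
    their own consent; younger users need verified consent from a holder of
    parental responsibility (how that verification is actually performed is
    the hard part, and is not modelled here).
    """
    if age_in_years(date_of_birth) >= UK_DIGITAL_AGE_OF_CONSENT:
        return True
    return parental_consent_verified
```

In practice, of course, the declared date of birth would itself need to be corroborated by some form of age assurance; relying on self-declaration alone is precisely what the regulator criticises in the Reddit decision quoted below.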
Does the regulator enforce the digital age of consent when platforms fail to do so themselves?
Rarely, historically speaking, but this is changing. On 5 February 2025, the UK’s data protection regulator, the ICO, fined MediaLab.AI, Inc. (owner of image sharing and hosting platform Imgur) £247,590 for doing too little to ensure parental consent was obtained. On 24 February 2026, the ICO announced a £14.47m fine for Reddit. The UK Information Commissioner, reacting to the fine, said:
“Companies operating online services likely to be accessed by children have a responsibility to protect those children by ensuring they’re not exposed to risks through the way their data is used. To do this, they need to be confident they know the age of their users and have appropriate, effective age assurance measures in place. Reddit failed to meet these expectations… relying on users to declare their age themselves is not enough when children may be at risk and we are focusing now on companies that are primarily using this method. I therefore strongly encourage industry to take note, reflect on their practices and urgently make any necessary improvements to their platforms.”
Ok, so platforms will have to get serious about age checks. Doesn’t that mean they will collect far more personal data?
Yes. One of the benefits of a ‘light-touch’ approach to age checks (eg self-certification by ticking a box) is that it involves collecting very little personal data. A children’s social media ban would mean platforms need to obtain and process more personal data as part of the registration process. That may involve:
- Uploading Government issued ID documents, such as a passport or a driving licence
- A facial age estimation process (based on a video, or a photograph); or
- Sharing bank or payment card details
All of these mechanisms have drawbacks. Some are ineffective. The methods that work better involve a privacy trade-off that adults (and adolescents) may not be willing to make. This may be for reasons of principle (why should I have to share my personal information with you to use your service?) or pragmatism (if I provide my personal information to you, how do I know you will keep it safe?).
As platforms collect more information, the level of risk rises, meaning that additional technical and organisational security measures will be required to comply with Article 32 of the UK GDPR.
What other laws are relevant here?
In the UK, the Online Safety Act 2023 already goes some way towards dealing with the problems identified by those who favour a children’s social media ban. Social media platforms can be subject to a variety of duties related to children’s safety under the Act, including requirements to:
- Carry out an assessment of the risk of children accessing certain forms of content which the Act defines as harmful (eg pornography, content promoting self harm or eating disorders).
- Moderate the platform for harmful content and take steps to prevent children from encountering it, for example by implementing ‘highly effective’ age verification* or age estimation* (or both) [Ofcom, the regulator tasked with enforcing the Online Safety Act, has published guidance on what is/isn’t considered to be ‘highly effective’].
*Age verification: requires a user to prove their exact age (or date of birth), usually through official documentation or other hard evidence checks.
*Age estimation: any method used to estimate, infer, or confirm whether a user is likely above or below a certain age, without requiring formal proof of identity.
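By way of illustration, the sketch below shows one way the two approaches might be layered in practice: accept an age estimate where it is comfortably above the threshold, and fall back to verification where it is not. The threshold, the ‘challenge buffer’ and the idea of retaining only the outcome of the check are illustrative assumptions, not a statement of what Ofcom’s guidance requires.

```python
from dataclasses import dataclass
from typing import Callable

MINIMUM_AGE = 16        # hypothetical statutory minimum age
CHALLENGE_BUFFER = 5    # fall back to verification if the estimate is within this margin


@dataclass
class AgeCheckResult:
    over_minimum_age: bool  # the only fact the platform needs to retain
    method: str             # "estimation" or "verification"


def check_age(estimated_age: float, request_verified_age: Callable[[], int]) -> AgeCheckResult:
    """Hypothetical layered age assurance.

    Accept a facial age estimate only when it is comfortably above the minimum
    age; otherwise fall back to age verification (eg a document check handled
    by a third-party provider), passed in as a callable so the underlying
    evidence never has to be stored by the platform itself.
    """
    if estimated_age >= MINIMUM_AGE + CHALLENGE_BUFFER:
        return AgeCheckResult(over_minimum_age=True, method="estimation")
    verified_age = request_verified_age()
    return AgeCheckResult(over_minimum_age=verified_age >= MINIMUM_AGE, method="verification")
```

Retaining only the result of the check, rather than the document or image used to reach it, is one way of containing the security risk discussed above in the context of Article 32.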
How might the law evolve in the UK to address concerns about children’s social media use?
Reaching for the crystal ball, we see three options:
- Amending the UK GDPR so that parental consent is required before a child under 16 (as opposed to 13) can hold a social media account. Of course, some parents might choose to allow their children to access the platforms anyway, which others would perceive to be a problem.
- Amending the Online Safety Act (along the lines of what Australia has done) to add a definition of ‘age-restricted social media platform’ (or similar).
- Creating a standalone law to address children’s social media use.
In our view, option 2 is most likely.
What next?
Although the exact legal mechanisms for implementation are still up for debate, UK and European leaders are increasingly deciding that a social media ban for children is the way forward. The debate is politically charged. The Guardian reports comments from Spanish Prime Minister Pedro Sanchez that social media is “a failed state, where laws are ignored and crimes are tolerated”. Elon Musk fired back by describing Sanchez as “a tyrant and a traitor to the people of Spain.”
The strength of feeling on both sides of the argument makes life difficult for the platforms charged with finding a way to implement a children’s social media ban. They must somehow avoid:
- Imposing more restrictions than the law requires
- Irritating their adult customers with new forms of age check
- Incurring the displeasure of the regulators which, for political reasons, are taking a keen interest in compliance
What does this mean for business?
Compliance with your legal obligations towards children under online safety and data protection laws will become increasingly important. Have you got a highly effective form of age verification or age estimation in place? Where you find yourself wanting or needing to collect more personal data from your users, do you have a plan for how to do this in a way that will ensure compliance with the law, minimise the risk of data becoming compromised and retain the trust of your user base?
The ICO’s guidance on age assurance and Ofcom’s guidance on implementing ‘highly effective’ age assurance are both valuable resources for anyone tasked with thinking through these issues on behalf of their organisation.
If you would like to discuss any of the issues raised in this article, please get in touch.
Our content explained
Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.