Children and Social Media: Europe's New Age Verification Laws
Europe is leading the world in regulating children's access to social media. Across the continent, lawmakers have recognized that self-declaration of age, the method platforms have relied on for years, is fundamentally inadequate. A child who can click "I am over 13" is no more protected than one with no age gate at all. In response, a wave of new legislation is reshaping how platforms must verify the age of their users, creating genuine obligations backed by substantial penalties.
For parents, educators, and platform operators alike, understanding this rapidly evolving landscape is essential. This guide provides a comprehensive overview of the EU-level framework, the specific national laws that are already in effect, the technology behind age verification, and the privacy considerations that make this issue more complex than it might first appear.
The EU Digital Services Act: The Foundation
The Digital Services Act (DSA), Regulation (EU) 2022/2065, which became fully applicable on 17 February 2024, is the cornerstone of EU efforts to protect children online. While the DSA does not set a specific age for social media access (that is left to GDPR and national laws), it creates powerful obligations for platforms regarding minors.
Article 28 of the DSA requires online platforms to implement "appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors" on their services. This is not a suggestion; it is a binding legal obligation enforceable by the European Commission for Very Large Online Platforms (VLOPs) and by national Digital Services Coordinators for smaller platforms.
The DSA's specific requirements for platforms accessible to minors include:
- No profiling-based advertising to minors: Article 28(2) prohibits platforms from presenting advertising based on profiling when they are aware "with reasonable certainty" that the user is a minor. This effectively forces platforms to determine whether users are minors.
- Risk assessments: VLOPs must conduct annual systemic risk assessments (Article 34) that specifically evaluate risks to minors, including the impact of their algorithmic recommendation systems on children's well-being.
- Mitigation measures: Platforms must implement risk mitigation measures (Article 35), which can include age verification tools, age-appropriate design features, and content moderation standards tailored to minors.
- Transparency: Platforms must be transparent about the measures they take to protect minors, and their terms of service must be written in language that children can understand.
The enforcement mechanisms are formidable. The European Commission can impose fines of up to 6% of a VLOP's global annual turnover for non-compliance. For a company like Meta (which operates Facebook, Instagram, and WhatsApp), this could mean fines exceeding EUR 7 billion.
GDPR and the Digital Age of Consent
The General Data Protection Regulation (GDPR) adds another crucial layer. Article 8 establishes that processing a child's personal data based on consent is only lawful if the child has reached the "digital age of consent." The GDPR sets this at 16 years by default but allows member states to lower it to as low as 13 years. The result is a patchwork across Europe:
- 16 years: Germany, Luxembourg, Netherlands, Romania, Croatia, Hungary, Ireland, Poland, Slovakia
- 15 years: France, Czech Republic, Greece, Slovenia
- 14 years: Austria, Bulgaria, Cyprus, Italy, Lithuania, Spain
- 13 years: Belgium, Denmark, Estonia, Finland, Latvia, Malta, Portugal, Sweden
Below these ages, parental consent is required for the processing of a child's personal data. The challenge, of course, is verifying both the child's age and the parent's identity and consent, which brings us to the national laws that are attempting to solve this problem.
Country-by-Country: National Age Verification Laws
France: The Pioneer
France has been the most aggressive EU member state in pursuing enforceable age verification. Law No. 2023-566 of 7 July 2023 (the loi visant à instaurer une majorité numérique, establishing a "digital majority" of 15) requires social media platforms to verify the age of all users and obtain parental consent for users under 15. Platforms that fail to implement "technically reliable" age verification face fines of up to 1% of global revenue.
France's national data protection authority, CNIL, has developed a reference framework for age verification that promotes a "double anonymity" approach. In this model, a trusted third-party service verifies the user's age (for example, by checking an ID document), then provides a simple "over/under age threshold" token to the platform. The platform never sees the ID document, and the verification provider never knows which platform the user is accessing. This is designed to protect privacy while enabling genuine age verification.
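The double-anonymity flow can be sketched in a few lines. This is an illustrative model only, not CNIL's specification: a shared HMAC key between the verification provider and the platform stands in for a real digital-signature infrastructure, and all field names are assumptions. The point is what the token does and does not contain.

```python
import hashlib
import hmac
import json
import secrets
import time

# Stand-in for a real PKI: a key shared by verifier and platform.
SHARED_KEY = secrets.token_bytes(32)

def issue_age_token(over_threshold: bool, threshold: int) -> dict:
    """Verifier side: after checking an ID document, emit a token carrying
    only the over/under result -- no name, no birth date, no document."""
    payload = {
        "over_threshold": over_threshold,
        "threshold": threshold,
        "nonce": secrets.token_hex(16),      # fresh per check: no tracking ID
        "expires": int(time.time()) + 300,   # short-lived
    }
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def platform_accepts(token: dict) -> bool:
    """Platform side: check integrity and expiry; learn nothing else."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    if token["payload"]["expires"] < time.time():
        return False
    return token["payload"]["over_threshold"]

token = issue_age_token(over_threshold=True, threshold=15)
assert platform_accepts(token)
assert "name" not in token["payload"] and "birth_date" not in token["payload"]
```

Because the verifier signs only a boolean and a one-time nonce, the platform cannot recover identity data, and a tampered payload fails the integrity check.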
Additionally, France's ARCOM (the audiovisual and digital regulator) has been empowered to order internet service providers to block access to platforms that persistently refuse to implement age verification. In October 2024, ARCOM issued its first blocking orders against adult content websites, signaling that enforcement is real.
Spain: Protecting the Under-16s
Spain's draft Organic Law on the Protection of Children and Adolescents in Digital Environments, approved by the Council of Ministers in June 2024, sets the minimum age for social media use at 16 (raising Spain's GDPR digital age of consent from 14). The law requires platforms to implement "effective" age verification mechanisms and to provide parents with tools to manage their children's online activity.
The Spanish Agency for Data Protection (AEPD) has developed a pilot verification system that uses mobile phone operators as trusted intermediaries. When a user attempts to register on a social media platform, the platform can request an age verification check through the user's mobile carrier, which already holds verified identity data from the SIM registration process. The carrier returns only a binary yes/no response regarding whether the user meets the age threshold.
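The carrier model reduces to a narrow interface: the operator already holds a verified birth date, but the API exposes only a yes/no answer. A minimal sketch, with an invented registry and phone number purely for illustration:

```python
from datetime import date

# Hypothetical SIM registry: the carrier holds a verified birth date
# from the registration process (illustrative data, not a real number).
SIM_REGISTRY = {"+34600123456": date(2010, 5, 2)}

def meets_age_threshold(msisdn: str, threshold_years: int, today: date) -> bool:
    """Return only a binary answer; the birth date never leaves the carrier."""
    birth = SIM_REGISTRY[msisdn]
    # Age in whole years, accounting for whether the birthday has passed.
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    return age >= threshold_years

assert meets_age_threshold("+34600123456", 16, date(2025, 1, 1)) is False
```

The design choice is the return type: a `bool` rather than an age or a date means the platform can enforce the threshold without ever learning anything it could later correlate or leak.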
Netherlands: Self-Regulation with Teeth
The Netherlands has taken a somewhat different approach, relying on its existing robust data protection framework and the strong enforcement culture of the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP). The AP has taken the position that platforms must use "reasonable efforts" to verify the age of their users, and has imposed significant fines for failures. In 2021, the AP fined TikTok EUR 750,000 for failing to provide its privacy information in Dutch, leaving children unable to understand how their data was processed, and indicated that future fines would be substantially higher.
The Netherlands has also pioneered the iDIN digital identification system, linked to the Dutch banking infrastructure, which can be used for age verification. Since virtually all Dutch adults have a bank account with verified identity data, iDIN can provide high-assurance age verification with minimal friction for adult users.
Germany: The Co-Regulation Model
Germany's approach to child online protection is anchored in the Interstate Treaty on the Protection of Minors in the Media (Jugendmedienschutz-Staatsvertrag, JMStV) and the Youth Protection Act (Jugendschutzgesetz, JuSchG). The updated JuSchG, amended in 2021, requires platforms to implement "appropriate and effective structural precautions" (angemessene und wirksame strukturelle Vorsorgemaßnahmen) to protect minors.
The Federal Agency for the Protection of Children and Young People in the Media (BzKJ) can classify platforms as "impeding development" and impose obligations including mandatory age verification. Germany's system is notable for its use of recognized age verification providers (e.g., those certified by the Commission for the Protection of Minors, KJM), which must meet strict standards for reliability and data protection.
How Age Verification Works: The Technology
Age verification technology has evolved significantly beyond the simple "enter your date of birth" checkbox. Current methods fall into several categories, each with distinct advantages and limitations:
ID document verification: The user uploads a photo of an official identity document (passport, ID card, driver's license). AI-powered systems extract the date of birth and verify the document's authenticity through security feature analysis. This is highly reliable but creates privacy concerns about sharing identity documents with commercial platforms.
Facial age estimation: AI analyzes a selfie or real-time camera feed to estimate the user's age. Modern systems can estimate age within a margin of approximately 1.5-2.5 years for users between 13 and 17. This method does not require sharing identity documents, but accuracy varies and there are concerns about bias across different ethnicities.
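Because estimates carry an error margin, systems built on facial age estimation typically cannot give a clean accept/reject at the threshold itself. A common pattern is a buffered decision rule: accept only users estimated clearly above the threshold, reject those clearly below, and route the borderline band to a stronger check. The values below are assumptions for illustration, not any vendor's published thresholds:

```python
def route_user(estimated_age: float, threshold: int, margin: float = 2.5) -> str:
    """Three-way routing for a facial age estimate with a known error margin."""
    if estimated_age >= threshold + margin:
        return "allow"      # over the threshold even with worst-case error
    if estimated_age < threshold - margin:
        return "deny"       # under the threshold even with worst-case error
    return "escalate"       # borderline: require ID or a third-party check

assert route_user(19.0, threshold=16) == "allow"
assert route_user(12.0, threshold=16) == "deny"
assert route_user(16.5, threshold=16) == "escalate"
```

The escalation branch is what keeps the estimation error from silently admitting under-age users or wrongly locking out adults; only the uncertain band pays the cost of a heavier verification step.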
Third-party identity verification: Trusted intermediaries such as banks, mobile operators, or government digital identity systems verify the user's age and provide a token to the platform. This is the privacy-preferred model, as the platform never accesses the underlying identity data. The EU's eIDAS 2.0 framework (Regulation (EU) 2024/1183) and its European Digital Identity Wallet are expected to make this the standard approach across Europe.
Credit card or digital payment verification: Ownership of a credit card is used as a proxy for adult status. While simple to implement, this method is unreliable because children can access parents' cards, and some EU countries have different age requirements for financial products.
Open banking verification: Similar to the Dutch iDIN system, this uses the user's bank as a trusted age verifier. The bank has already completed KYC (Know Your Customer) identity verification, so it can confirm age without sharing additional personal data.
Privacy Concerns: The Verification Paradox
Age verification creates a fundamental tension with privacy rights. To protect children's privacy online, we may need to require all users, including adults, to submit to identity verification processes that themselves pose privacy risks. This paradox has been the central challenge for regulators and technologists alike.
The European Data Protection Board (EDPB) has issued guidelines stating that age verification systems must comply with the principles of data minimization and purpose limitation under GDPR. Systems should collect only the minimum data necessary to determine age, should not create profiles or track users across services, and should delete verification data immediately after the age determination is made.
Key privacy safeguards that any age verification system should incorporate:
- Data minimization: The system should only determine whether the user is above or below the age threshold, not their exact age or identity
- Separation of roles: The entity verifying age should be independent from the entity providing the service, preventing correlation between identity and online activity
- No data retention: Verification data (photos, documents, biometric estimates) should be deleted immediately after the age determination is made
- No tracking: The verification process should not create persistent identifiers that could be used to track users across platforms or over time
- Transparency: Users and parents must be clearly informed about what data is collected, how it is processed, and who has access to it
- Right to challenge: Users whose age is incorrectly estimated should have a clear, accessible process to challenge the determination
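The data-minimization and no-retention safeguards above translate into a simple structural rule: the only object that survives verification is the yes/no result. A minimal sketch (the "document scan" is a stand-in byte string, and the parsing step replaces real OCR, both assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeCheckResult:
    over_threshold: bool   # the sole field: no exact age, no identity, no scan

def check_age(document_scan: bytes, threshold_birth_year: int) -> AgeCheckResult:
    """Derive an over/under result; keep nothing from the evidence."""
    # Stand-in for real document OCR: here the "scan" is just b"DOB:<year>".
    birth_year = int(document_scan.split(b":")[1])
    result = AgeCheckResult(over_threshold=birth_year <= threshold_birth_year)
    # Drop local references to the evidence; nothing is stored or logged.
    del document_scan, birth_year
    return result

res = check_age(b"DOB:2009", threshold_birth_year=2009)
assert res.over_threshold is True
```

Confining the raw evidence to the function scope and returning a frozen single-field result makes the retention policy a property of the code's shape, not just of an operational promise.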
Platform Obligations Beyond Age Verification
Age verification is just one piece of the puzzle. European law increasingly requires platforms to create fundamentally different experiences for minor users. These obligations include:
Safe default settings: Accounts belonging to minors must default to the highest privacy settings. Profiles should be private by default, location sharing disabled, direct messaging from strangers blocked, and content recommendations set to the most restrictive level.
Algorithmic protections: The DSA requires that recommendation algorithms must not amplify content that could be harmful to minors. This includes content promoting eating disorders, self-harm, extreme dieting, dangerous challenges, or age-inappropriate material. Several platforms have introduced "teen experience" versions of their algorithms in response.
Time management tools: Some national laws now require platforms to provide built-in tools that help minors manage their screen time. These include session time limits, break reminders, night-time usage restrictions, and usage dashboards accessible to both the minor and their parent.
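The logic behind such tools is straightforward. A sketch of a combined night-time restriction and daily cap (the specific limits are assumptions for illustration, not any platform's actual policy):

```python
from datetime import datetime, time, timedelta

# Illustrative limits, not any platform's real policy.
NIGHT_START, NIGHT_END = time(22, 0), time(7, 0)
DAILY_LIMIT = timedelta(minutes=90)

def may_continue(now: datetime, used_today: timedelta) -> bool:
    """Allow a minor's session only outside night hours and under the cap."""
    at_night = now.time() >= NIGHT_START or now.time() < NIGHT_END
    return (not at_night) and used_today < DAILY_LIMIT

assert may_continue(datetime(2025, 3, 1, 15, 0), timedelta(minutes=30))
assert not may_continue(datetime(2025, 3, 1, 23, 0), timedelta(minutes=30))
assert not may_continue(datetime(2025, 3, 1, 15, 0), timedelta(minutes=120))
```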
Reporting mechanisms: Platforms must provide accessible reporting tools designed for young users, including prominent reporting buttons, age-appropriate language in reporting forms, and fast-track review processes for reports involving minors.
What Parents Can Do Now
While regulation is catching up, parents remain the most important line of defense. Here are practical steps every parent in Europe should consider:
- Use platform parental controls: Every major platform now offers parental supervision features. Instagram's "Supervision" tool, TikTok's "Family Pairing," and YouTube's "Supervised Experiences" all allow parents to manage their child's experience without requiring access to the child's password.
- Set up device-level controls: Both Apple's Screen Time and Google's Family Link provide device-level controls that work across all apps, including time limits, app restrictions, content filtering, and location sharing.
- Have the conversation: Research consistently shows that open, non-judgmental conversation about online safety is more effective than technical restrictions alone. Discuss what information should never be shared online, how to recognize manipulative behavior, and what to do if something makes them uncomfortable.
- Review privacy settings together: Sit with your child and review the privacy settings on each platform they use. Make it a regular activity rather than a one-time exercise, as platforms frequently update their settings and defaults.
- Know where to report: Familiarize yourself with both platform-level reporting tools and national hotlines. The INHOPE network coordinates child safety hotlines across Europe, and each country operates its own reporting center (e.g., Internet Watch Foundation in the UK, Jugendschutz.net in Germany, Point de Contact in France).
- Model good behavior: Children learn from watching their parents. If you spend meals checking your phone or share their photos publicly without consent, they will internalize those norms. Practice the digital boundaries you want your children to adopt.
The European regulatory landscape around children and social media is evolving rapidly. As the EU Digital Identity Wallet rolls out in 2026 and beyond, age verification will become more reliable and less invasive. But technology and regulation alone are not sufficient. The most effective child protection combines robust legal frameworks, responsible platform behavior, effective technology, and engaged parenting. Europe is making strong progress on the first three; the last one remains in our hands.