
U.K. Age Verification Law Forces Millions of Adults to Submit Biometric Data

A law designed to protect children online is poised to fundamentally alter how every adult in the United Kingdom uses the internet. Under the Online Safety Act - which received Royal Assent in October 2023 - and its forthcoming expansion, social media platforms operating in the U.K. may soon require users to submit government-issued identification, facial scans, or other biometric data simply to access services they currently use without restriction. With consultations closing on May 26, 2026, and government ministers making clear the question is not whether restrictions will arrive but how, the privacy implications for the country's estimated 55 million adult internet users are substantial.

What the Law Actually Requires - and Who It Affects

The policy is framed around protecting users under 16. As of April 27, 2026, the U.K. government committed to expanding social media restrictions for that age group, with the precise enforcement mechanisms to be determined through a late May consultation. Education minister Olivia Bailey put the government's position plainly in the House of Commons: "We will impose some form of age or functionality restrictions for children under 16. The status quo cannot continue."

The options under consideration include outright platform bans for under-16s, feature restrictions, time-based curfews, or combinations of all three. Platforms expected to fall under these rules include Facebook, Instagram, TikTok, X, YouTube, Snapchat, and Reddit. Communication platforms such as Discord are likely to be brought into scope as well.

The structural problem is straightforward: to restrict access for those under 16, platforms must be able to verify that everyone else is over 16. That means age verification is, in practice, a universal requirement - not a targeted one. The verification methods under discussion include uploading a government-issued ID, submitting a selfie or facial scan, using open banking or credit card checks, and digital identity wallets.

The Privacy Risk Hidden Inside a Child Safety Measure

The collection of biometric and identity data at this scale introduces risks that go well beyond the original policy goal. Unlike a compromised password, biometric data cannot be changed. A leaked facial scan or identity document creates a permanent vulnerability for the individual whose data was exposed. More than one billion identity records have been exposed globally through data breaches in recent years - a figure that underscores how frequently even well-resourced organizations fail to protect sensitive information.

Survey data from All About Cookies found that 79% of people cite data privacy as their primary concern about age verification laws. Notably, while 74% of Americans surveyed expressed support for age verification on social media platforms, only 20% considered age verification laws the most effective way to protect children online - suggesting that public appetite for the goal does not necessarily translate into confidence in the method.

The law also ends anonymous browsing on major platforms for anyone in scope. Once a platform has verified and linked your identity to your account, the separation between your real-world identity and your online activity collapses. There are currently no publicly confirmed standards for how this data must be stored, how long it can be retained, who can access it, or under what circumstances it can be shared with third parties or authorities.

What Adults Can Do Now

If the restrictions are applied at the network or location level - meaning platforms enforce them based on where a user's connection originates - a VPN offers adults a practical workaround. By masking a device's IP address and routing traffic through a server in another country, a VPN makes the user appear to be accessing the platform from outside the U.K., sidestepping location-based enforcement entirely.

The limitation is significant, however. If platforms implement age verification at the account level - requiring users to confirm their identity directly before accessing services - a VPN offers no protection. Some platforms, including Discord, have already begun testing account-level verification systems. Apple has also begun rolling out age checks for U.K.-based iPhone and iPad users to determine whether they qualify as adults for access to certain services.
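The difference between the two enforcement models can be sketched in a few lines. Everything below - the GeoIP stand-in, the `verified_adult` flag - is an illustrative assumption, not any platform's actual mechanism; the point is only why a VPN defeats one gate and not the other.

```python
# Hypothetical sketch of the two enforcement models discussed above.
# The GeoIP lookup and the `verified_adult` flag are illustrative
# assumptions, not any platform's real implementation.

RESTRICTED_REGION = "GB"

def geolocate(ip: str) -> str:
    """Stand-in for a GeoIP lookup; a real service maps an IP to a country code."""
    # Pretend addresses starting with "81." originate in the U.K.
    return "GB" if ip.startswith("81.") else "US"

def location_based_gate(ip: str) -> bool:
    """Network-level enforcement: allow if the connection appears non-U.K.
    A VPN defeats this by presenting a foreign exit IP."""
    return geolocate(ip) != RESTRICTED_REGION

def account_based_gate(account: dict) -> bool:
    """Account-level enforcement: allow only verified-adult accounts.
    The exit IP is irrelevant here, so a VPN offers no workaround."""
    return account.get("verified_adult", False)

# A U.K. user on a VPN passes the location gate but not the account gate.
vpn_exit_ip = "104.16.0.1"             # appears to be outside the U.K.
unverified_account = {"verified_adult": False}

print(location_based_gate(vpn_exit_ip))       # True: geolocation is fooled
print(account_based_gate(unverified_account)) # False: identity check still blocks
```

The asymmetry is the whole story: the location gate inspects a property the user controls (the apparent origin of the connection), while the account gate inspects a property only the platform controls (whether identity verification has been completed).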

VPNs remain legal for adults in the U.K. The Online Safety Act's Amendment 92 restricts their use by children under 16, but adult use is not affected. As of early May 2026, no implementation date has been set for the new rules. The consultation window closes May 26, giving the public a narrow remaining opportunity to engage with the process before enforcement mechanisms are finalized.

  • Platforms in scope: Facebook, Instagram, TikTok, X, YouTube, Snapchat, Reddit, Discord, and others
  • Verification methods under discussion: government ID upload, facial scan, open banking, digital identity wallets
  • VPNs: effective against location-based enforcement; ineffective against account-level verification
  • Consultation closes: May 26, 2026
  • No implementation date confirmed as of May 2, 2026

The broader tension this law exposes is one that governments across multiple countries are grappling with: child safety measures that require identity infrastructure impose costs on the entire population, not just the group being protected. Whether those costs - in privacy, in data security, in the erosion of anonymous expression - are proportionate to the benefits is a question the current consultation has yet to seriously answer.