YouTube has begun testing a new artificial intelligence system that can estimate a user’s age, with the aim of protecting minors from harmful or inappropriate content. While the move is part of a wider push to make the internet safer for young people, it has also sparked debate over privacy.
How the AI Works
The system studies several signals from a user’s account, such as:
The type of videos watched and searched for.
How long the account has been active.
Other behavior patterns on the platform.
From this information, the AI decides whether the user is a minor (under 18) or an adult, regardless of the birth date given at sign-up. If identified as a minor, YouTube will automatically activate its teen safety settings—limiting certain videos, suggesting screen breaks, and adjusting recommendations.
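YouTube has not published how its model works, but the signal-and-threshold idea described above can be sketched as a toy classifier. Everything below — the signal names, weights, and threshold — is invented for illustration; a real system would learn these from labeled data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account signals of the kind described above."""
    teen_content_ratio: float   # fraction of watched/searched videos in teen-skewing categories
    account_age_days: int       # how long the account has been active
    late_night_sessions: float  # fraction of viewing sessions after midnight

def estimate_is_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True if the signals suggest the user is under 18.

    The weights and threshold here are made up for illustration only.
    """
    score = 0.0
    score += 0.6 * s.teen_content_ratio                      # viewing skews heavily teen
    score += 0.3 * max(0.0, 1 - s.account_age_days / 3650)   # newer accounts lean younger
    score += 0.1 * s.late_night_sessions                     # minor weight on session timing
    return score >= threshold

# A new account dominated by teen-skewing content scores above the threshold:
flagged = estimate_is_minor(AccountSignals(0.9, 120, 0.4))        # True
# A long-lived account with little teen-skewing viewing does not:
not_flagged = estimate_is_minor(AccountSignals(0.1, 3000, 0.1))   # False
```

In the sketch, a positive result would correspond to YouTube switching the account to its teen safety settings; the birth date entered at sign-up plays no role in the decision.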
What If the AI Gets It Wrong?
Adults who are incorrectly flagged as minors will have to verify their age by providing a government-issued ID, credit card, or selfie. This requirement has raised concerns among users who are uneasy about sharing personal information and biometric data.
Privacy experts say such worries are understandable. YouTube’s parent company, Google, has responded by assuring users that all data collected during verification is protected by advanced security systems and will not be used for advertising.
A Bigger Industry Shift
YouTube is not alone in this move. Other social media platforms are also stepping up age checks:
Meta uses AI to catch teens lying about their age on Instagram.
TikTok uses technology to detect users younger than 13.
Reddit and Discord have added age verification in line with the UK’s new Online Safety Act.
For now, YouTube’s new age-detection AI is being tested with a small group of U.S. users, but a wider rollout is expected in the coming months. The feature will only work for logged-in accounts, with non-registered users still unable to view age-restricted content.
