Meta will start using AI to scan photos and videos for visual clues, such as a person’s height or bone structure, to determine whether a user is under 13 and should be removed from Facebook and Instagram, the company announced on Tuesday.
“We want to be clear: this is not facial recognition,” Meta explained in its blog post. “Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person in the image. By combining these visual insights with our analysis of text and interactions, we can significantly increase the number of underage accounts we identify and remove.”
The visual analysis system is now operating in select countries, but Meta says it’s working toward a broader rollout.
Meta says this system is part of its efforts to keep kids under 13 off its platforms. These efforts include using AI to analyze entire profiles for contextual clues, such as birthday celebrations or mentions of school grades. The company looks for these signals across different formats, such as posts, comments, bios, captions, and more. Meta plans to expand this technology to more parts of its apps, including Instagram Live and Facebook Groups, in the future.
If Meta determines that a person may be underage, it will deactivate their account; to prevent the account from being deleted, the user must then prove their age through the company’s age verification process.
The announcement comes weeks after a New Mexico jury ordered Meta to pay $375 million in civil penalties for misleading consumers about the safety of its platforms and putting children at risk. The company was also ordered to implement fundamental changes to its platforms. Meta has since threatened to shut down its social media services in the state.
It’s worth noting that this case is one of many lawsuits that Meta and other Big Tech companies are facing over child safety.
Meta also announced on Tuesday that it’s expanding the technology that automatically places teens into stricter “Teen Accounts” on Instagram to 27 countries in the EU and to Brazil. Teen Accounts come with additional safeguards, such as receiving DMs only from people the teen follows or is already connected to, hiding harmful comments, and setting accounts to private by default.
Additionally, Meta said it’s expanding the technology to Facebook in the U.S. for the first time, followed by the U.K. and EU in June.
