Social media app Instagram is trialling new ways such as artificial intelligence to verify users’ age.

The Meta-owned company has come under fire from safety advocates, who have urged it to prevent teens from seeing harmful content and to restrict underage use, according to Bloomberg.

The social media platform has begun requiring users to prove that they are over 13 and eligible to use the app by submitting their birth date. It has also turned to several tech solutions, including running users’ video selfies through artificial intelligence software that estimates whether they are adults.

Additionally, Instagram introduced new privacy settings for 13- to 18-year-olds, including parental controls. Now, if someone tries to edit their profile to say they’re an adult, Instagram has a few options beyond submitting a personal identification card.

Starting in the US, Instagram will accept video selfies, which Meta will submit to the identity verification company Yoti, Bloomberg said. “Yoti’s technology estimates your age based on your facial features and shares that estimate with us,” Instagram said in a statement. “Meta and Yoti then delete the image.”

The Meta-owned platform is making these changes as part of its commitment to raise its standards for protecting teenagers.

The promise came after a whistle-blower testified in October that Facebook had prioritised profit over the wellbeing of users, specifically teens.

Yoti said it trained the AI through “anonymous images of diverse people from around the world who have transparently allowed Yoti to use their data”. It has knowledge of what under-13s look like because of images obtained with parental consent, it added.

If users don’t want to submit a video or ID, they can also ask three adult users to vouch for them. Those users will get a request to confirm the person’s age, must respond within three days and must not vouch for anyone else at the same time.
