
Meta launches AI-powered teen safety and age assurance measures in Pakistan across Instagram, Facebook, and Messenger.
Meta has introduced new AI-powered age assurance and teen safety measures across its platforms in Pakistan, as the company intensifies efforts to create safer online environments for younger users amid growing global scrutiny over social media’s impact on teenagers.
The update applies across Instagram, Facebook, and Messenger, where teens are automatically placed into age-appropriate experiences, including Meta’s Teen Accounts system. These accounts carry built-in restrictions that limit who can contact teens, the content they can view, and the interactions they receive.
According to Meta, determining a user’s real age online remains one of the industry’s biggest challenges. To address this, the company is deploying AI systems that analyse contextual signals across user profiles to flag potentially underage users, even when an adult birth date was entered at registration.
The company said its AI models assess factors including birthday-related posts, school references, captions, comments, bios, and other behavioural indicators across formats such as Instagram Reels, Facebook Groups, and live content. Accounts identified as potentially underage may be temporarily disabled until users complete an age verification process.
Meta is also expanding its use of AI-driven age estimation technology, which infers a general age range from cues such as facial structure and physical appearance. The company emphasised that the system is not facial recognition and does not identify specific individuals; instead, it evaluates broader age-related patterns to improve detection of underage accounts.
The initiative also introduces AI-assisted moderation for user reports of suspected underage accounts. Meta said its testing showed that AI-supported review delivered faster and more accurate decisions than manual review alone.
The latest rollout comes as regulators globally continue to pressure major technology platforms to strengthen child safety protections online. Meta has repeatedly argued that app stores and operating systems should also play a greater role in age verification, rather than placing the entire burden on individual applications.
The company believes centralised age assurance at the operating system or app store level would provide a more consistent and privacy-focused framework for youth protection across digital platforms.
Meta’s expanded safety push in Pakistan reflects the increasing importance of AI-driven moderation systems as social media companies attempt to balance user growth, privacy concerns, regulatory pressure, and digital safety standards in rapidly growing internet markets.
