Responding to mounting concerns from lawmakers and the public, Meta has unveiled a fresh set of safety features aimed at shielding teenage users from online exploitation on Instagram. These updates include enhanced direct messaging restrictions and new alerts that help teens identify potentially harmful interactions before they escalate.

Safety Notices will now show teens more detailed information about people they interact with — including account creation dates and warning signs to watch for. Meta is also rolling out a streamlined process that lets teens block and report suspicious accounts in one simple step, reducing friction in protecting themselves on the platform.

According to Meta, teens took action over 2 million times in June alone, blocking or reporting accounts after receiving safety alerts. These alerts are now a core part of the company’s broader efforts to prevent grooming and inappropriate interactions, especially involving adult-run accounts that feature child content.

More than 135,000 Instagram accounts were recently removed for engaging in the sexualization of minors — a growing problem Meta has pledged to tackle aggressively. These accounts were often caught leaving inappropriate comments or soliciting images. An additional half a million related accounts across Instagram and Facebook were shut down in the coordinated crackdown.

Going forward, all accounts operated by or for teens will automatically be set to the strictest privacy levels. This includes filtering offensive messages and blocking unsolicited contact from unfamiliar users. Even accounts representing younger children — managed by adults — are now under tighter restrictions to prevent abuse.

Policymakers have sharpened their focus on Meta’s responsibility, especially after allegations that the company’s products contribute to deteriorating mental health among youth. The Kids Online Safety Act, recently reintroduced in Congress, would hold tech giants legally accountable for failing to safeguard minors online.

Meta has also removed 10 million impersonator accounts this year alone — many of them mimicking popular creators to spread spam or scam users. That move is part of a wider campaign against fake accounts and misleading content, further signaling the platform’s shift toward more aggressive enforcement.

The full story is available in the original CNBC article.