The launch of Instagram’s new PG-13 safety system for teens has produced two conflicting narratives. On one side, Meta is promoting a story of robust new tools and parental empowerment. On the other, a recent report involving a company whistleblower tells a story of profound and ongoing failure.
Meta’s narrative is that the new “13+” default setting is a powerful solution. The company highlights that it filters a wide range of sensitive content and requires parental permission to disable, framing it as a major step forward in teen safety.
In stark contrast, the report co-authored by former Meta engineer Arturo Béjar paints a grim picture, concluding that two-thirds of the platform’s new safety tools were ineffective and that “Kids are not safe on Instagram.” That finding suggests any new tool from the company should be viewed with extreme skepticism.
These two narratives are now in direct opposition. Meta’s announcement reads as a rebuttal to the whistleblower’s claims, an attempt to prove its critics wrong with a tangible new product.
For the public, parents, and regulators, the challenge is to determine which narrative is closer to the truth. Safety advocates argue that the only way to resolve this is through transparency and independent verification, which would either validate Meta’s claims or confirm the whistleblower’s warnings.
