Meta to protect teens from unwanted messages on IG and FB

  • Meta introduces new safety measures on Instagram and Facebook Messenger for minors, including default blocking of messages from strangers and enhanced parental supervision tools.
  • These measures follow regulatory scrutiny and allegations that the platforms have contributed to child exploitation.
  • Meta aims to create a safer online environment for teens, addressing concerns over addictive platforms and their impact on adolescent mental health.

Meta has revealed new initiatives for its platforms, Instagram and Facebook Messenger, aimed at strengthening online safety for minors and preventing harassment and harm. The measures include default blocking of messages from strangers for users under 16 (or under 18 in certain regions), upgraded parental supervision tools, and a forthcoming feature to protect users from inappropriate images sent even by existing connections.

Enhanced safety measures for teens

This announcement builds on Meta’s efforts over the past year to protect minors, in response to allegations that its algorithms turned Facebook and Instagram into “trading markets for child predators.” Facing pressure in the U.S. and Europe over addiction concerns and its platforms’ impact on adolescent mental health, the company hopes the stricter restrictions and enhanced parental controls will curb illicit activity and create a safer online environment for teenagers.

Meta emphasises that all teen accounts will default to the strictest content controls on Instagram and Facebook, with additional search terms restricted on Instagram. The company states, “We want teens to have a safe, age-appropriate experience on our apps. Today, we announced more protections focused on the types of content teens see on Instagram and Facebook.” Even posts touching on sensitive topics will be removed from teens’ feeds, ensuring a more age-appropriate experience.

Regulatory scrutiny drives the new protections

The company faces legal action from 33 U.S. states, including California and New York, which accuse it of repeatedly misleading the public about its platforms’ dangers. In Europe, the EU Commission has demanded information from Meta on how it protects children from illegal and harmful content. The scrutiny intensified after former Meta employee Arturo Bejar testified in the U.S. Senate, claiming the company was aware of harassment and other harms to minors on its platforms but took no action. Bejar called for design changes on Facebook and Instagram to guide users toward positive behaviours and give young people better tools to manage unpleasant experiences. He revealed that his daughter had received unwelcome invitations on Instagram, and that his attempts to raise the issue with company executives were dismissed.

Competition for young users between Meta and TikTok has been fierce in recent years, with both vying for a demographic that could attract more advertisers seeking to build brand loyalty as these users grow older.

Chloe Chen

Chloe Chen is a junior writer at BTW Media. She graduated from the London School of Economics and Political Science (LSE) and had various working experiences in the finance and fintech industry. Send tips to c.chen@btw.media.
