Meta tests features to protect teens on Instagram

Meta’s efforts underscore the ongoing debate surrounding online safety for younger users.


Meta, Instagram’s parent company, has announced plans to trial new features aimed at protecting teens by automatically blurring images containing nudity in direct messages. The initiative is part of Meta’s broader effort to address concerns about harmful content on its platforms. The tech giant faces increasing scrutiny in the US and Europe amid allegations that its apps are addictive and contribute to mental health issues among young people.

The proposed protection feature for Instagram’s direct messages will utilise on-device machine learning to analyse images for nudity. It will be enabled by default for users under 18, with Meta urging adults to activate it as well. Because the analysis runs entirely on the recipient’s device, the feature will work even in end-to-end encrypted chats, where Meta itself cannot see message content.
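
Meta has not published implementation details, but the general flow it describes, a local classifier deciding whether the app blurs an image before display, can be sketched. The Python snippet below is a minimal, hypothetical illustration only: `nudity_score` is a stand-in for the real on-device model, and the threshold, function names, and blur radius are all assumptions, not Meta’s actual code.

```python
from PIL import Image, ImageFilter

# Hypothetical confidence threshold; Meta has not disclosed its actual value.
NUDITY_THRESHOLD = 0.8

def nudity_score(image: Image.Image) -> float:
    """Placeholder for an on-device classifier.

    In a real implementation this would run a local ML model shipped
    with the app and return a probability that the image contains
    nudity. Stubbed here for illustration.
    """
    return 0.0  # stub: always "safe"

def prepare_for_display(image: Image.Image, viewer_is_minor: bool,
                        protection_enabled: bool) -> Image.Image:
    """Blur an incoming DM image if the on-device check flags it.

    The check runs entirely on the recipient's device, which is why
    such a feature can work inside end-to-end encrypted chats: the
    plaintext image never has to leave the phone for analysis.
    """
    # Default-on for under-18s; adults would opt in via settings.
    if not (viewer_is_minor or protection_enabled):
        return image
    if nudity_score(image) >= NUDITY_THRESHOLD:
        # Heavy Gaussian blur; the app would let the user tap to reveal.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

The key design point is that both classification and blurring happen client-side, so the safety check does not require weakening encryption or routing images through Meta’s servers.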

Meta is also developing technology to identify accounts potentially involved in sextortion scams and is testing new pop-up messages to warn users who may have interacted with such accounts. These efforts come after Meta’s previous announcements regarding increased content restrictions for teens on Facebook and Instagram, particularly concerning sensitive topics like suicide, self-harm, and eating disorders.

Why does it matter?

The company’s actions follow legal challenges, including a lawsuit filed by 33 US states alleging that Meta misled the public about the dangers of its platforms. The European Commission has also requested information on Meta’s measures to protect children from illegal and harmful content in Europe. As Meta continues to navigate regulatory and public scrutiny, its focus on enhancing safety features underscores the ongoing debate surrounding social media’s impact on mental health and well-being, especially among younger users.