Meta has announced plans for Instagram to scan the messages of users under 18, aiming to protect them from “inappropriate images.” The feature, set to be unveiled later this year, will function even on encrypted messages, indicating Meta’s intention to implement client-side scanning. However, the company clarified that the update won’t fulfil contentious demands for reporting inappropriate messages back to Instagram’s servers: only the user’s device will determine whether a message has been filtered out, sparking criticism from those who see this as Meta “grading its own homework.”
“We’re planning to launch a new feature designed to help protect teens from seeing unwanted and potentially inappropriate images in their messages from people they’re already connected to,” stated Meta in a blog post. The company aims to discourage the sharing of such images by teens. More details about this feature, applicable even in encrypted chats, are expected to be shared later in the year.
This move follows a series of changes Meta has proposed in response to concerns that its plans to encrypt direct messages on Facebook Messenger and Instagram could jeopardise the safety of children and young users. The described feature bears similarity to Apple’s “communication safety” setting, introduced in 2021, which detects and blurs nude photos and videos sent to children’s devices, giving the child the option to view them or contact a trusted adult.
The current plans stop short of the stronger forms of client-side scanning advocated by children’s safety groups, which would actively report inappropriate messages to service moderators, helping to track and apprehend repeat offenders.
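The distinction between the two approaches can be made concrete in code. Below is a minimal sketch, in Python, of the purely client-side architecture Meta describes: a hypothetical on-device classifier (a stand-in here, flagging a marker byte rather than analysing real images) decides whether to blur an incoming image, and crucially no report is ever sent to a server. All names and logic are illustrative assumptions, not Meta’s actual implementation.

```python
# Illustrative sketch of client-side scanning as described in the article.
# The classifier and message format are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    image_bytes: bytes
    blurred: bool = False  # set on-device; never reported anywhere


def local_classifier(image_bytes: bytes) -> bool:
    """Hypothetical stand-in for an on-device image classifier.

    A real system would run a machine-learning model locally;
    here we simply flag images that begin with a marker byte.
    """
    return image_bytes.startswith(b"\x99")


def receive_message(msg: Message) -> Message:
    """Runs entirely on the recipient's device.

    Flagged images are blurred before display (the user could still
    choose to reveal them), but no network call is made and nothing
    is reported back to the service's moderators.
    """
    if local_classifier(msg.image_bytes):
        msg.blurred = True
    return msg
```

The version favoured by children’s safety groups would differ at exactly one point: inside `receive_message`, a flagged image would also trigger a report to the platform’s moderation servers, which is what makes that design contentious for end-to-end encrypted chats.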
The announcement coincides with Meta facing a lawsuit in New Mexico, accusing it of failing to protect children on its platforms. According to an unsealed legal filing related to the case, Meta estimates that about 100,000 children using Facebook and Instagram experience online sexual harassment daily. Meta CEO Mark Zuckerberg is scheduled to appear before the US Congress on Wednesday, along with other social media executives, for a hearing on child safety.
In addition to promising future scanning tools, Instagram has rolled out immediate updates to its teen safety features. Under-19s will now default to privacy settings that block direct messages from users they don’t follow; previously, this restriction applied only to adults messaging teens. Parents using Instagram’s “supervision” tools will also be prompted to actively approve or deny attempts by children under 16 to loosen their safety settings.
Former Meta senior engineer Arturo Béjar argued that Meta should regularly publish the number of unwanted advances teenagers experience on Instagram; without such data, the impact of safety updates remains unclear. “This is another ‘we will grade our own homework’ promise. Until they start quarterly reporting on unwanted advances, as experienced by teens, how are we to know they kept their promise or the impact it had?” he said. Béjar’s research in 2021 found that one in eight children aged 13–15 on Instagram had received unwanted sexual advances.