
How Instagram can take its child safety work even further


In May, I wrote here that the child safety problem on tech platforms is worse than we knew. A disturbing study from the nonprofit organization Thorn found that the majority of American children were using apps years before they were supposed to — and fully a quarter of them said they had had sexually explicit interactions with adults. That puts the onus on platforms to do a better job of both identifying child users of their services and protecting them from the abuse they might find there.

Instagram has now made some promising moves in that direction. Yesterday, the company said that it would:

  • Make accounts private by default for children 16 and younger
  • Hide teens’ accounts from adults who have engaged in suspicious behavior, such as being repeatedly blocked by other children
  • Prevent advertisers from targeting children with interest-based ads. (There was evidence that ads for smoking, weight loss, and gambling were all being shown to teens.)
  • Develop AI tools to prevent underage users from signing up, remove existing accounts of kids under 13, and create new age verification methods

The company also reiterated its plan to build a kids’ version of Instagram, which has drawn condemnations from … a lot of people.

Clearly, some of this falls into “wait, they weren’t doing that already?” territory. And Instagram’s hand has arguably been forced by growing scrutiny of how kids are bullied on the app, particularly in the United Kingdom. But as the Thorn report showed, most platforms have done very little to identify or remove underage users — it’s technically difficult work, and you get the sense that some platforms feel like they’re better off not knowing.

So kudos to Instagram for taking the challenge seriously, and building systems to address it. Here’s Olivia Solon at NBC News talking to Instagram’s head of public policy, Karina Newton (no relation), on what the company is building:

“Understanding people’s age on the internet is a complex challenge,” Newton said. “Collecting people’s ID is not the answer to the problem as it’s not a fair, equitable solution. Access depends greatly on where you live and how old you are. And people don’t necessarily want to give their IDs to internet services.”

Newton said Instagram was using artificial intelligence to better understand age by looking for text-based signals, such as comments about users’ birthdays. The technology doesn’t try to determine age by analyzing people’s faces in photos, she said.
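To make “text-based signals” concrete, here is a minimal, purely hypothetical Python sketch of how a single birthday comment might yield an age hint. The pattern and function name are my own illustration; Instagram has not described its system at this level of detail, and a production classifier would weigh far more signals than a single regular expression.

```python
import re
from typing import Optional

# Hypothetical illustration only: one crude way a "text-based signal" such as
# a birthday comment could hint at a user's age. Instagram has not published
# its actual models or signals; nothing here reflects its real system.
_BIRTHDAY_PATTERN = re.compile(
    r"happy\s+(\d{1,2})(?:st|nd|rd|th)?\s+(?:birthday|bday)",
    re.IGNORECASE,
)

def age_hint_from_comment(comment: str) -> Optional[int]:
    """Return an age mentioned in a birthday-style comment, or None."""
    match = _BIRTHDAY_PATTERN.search(comment)
    return int(match.group(1)) if match else None

print(age_hint_from_comment("Happy 12th birthday!!"))  # -> 12
print(age_hint_from_comment("Great photo!"))           # -> None
```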

At the same time, it’s still embarrassingly easy for reporters to identify safety issues on the platform with a handful of simple searches. Here’s Jeff Horwitz today in The Wall Street Journal:

A weekend review by The Wall Street Journal of Instagram’s current AI-driven recommendation and enforcement systems highlighted the challenges that its automated approach faces. Prompted with the hashtag #preteen, Instagram was recommending posts tagged #preteenmodel and #preteenfeet, both of which featured sometimes graphic comments from what appeared to be adult male users on pictures featuring young girls.

Instagram removed both of the latter hashtags from its search feature following queries from the Journal and said the inappropriate comments show why it has begun seeking to block suspicious adult accounts from interacting with minors.

Problematic hashtags aside, the most important thing Instagram is doing for child safety is to stop pretending that kids don’t use its service. At too many platforms, that pretense is still the default — and it has created blind spots that both children and predators can too easily exploit. Instagram has now identified some of those blind spots and publicly committed to eliminating them. I’d love to see other platforms follow suit here — and if they don’t, they should be prepared to explain why.

Of course, I’d also like to see Instagram do more. If the first step for platforms is acknowledging they have underage users, the second step is to build additional protections for them — ones that go beyond their physical and emotional safety. Studies have shown, for example, that teenagers are more credulous than adults and more likely to believe false stories, and they may also be more likely to spread misinformation. (This could explain why TikTok has become a popular home for conspiracy theories.)

Assuming that’s the case, a platform that was truly safe for young people would also invest in the health of its information environment. As a bonus, a healthier information environment would be better for adults and our democracy, too.

“When you build for the weakest link, or you build for the most vulnerable, you improve what you’re building for every single person,” Julie Cordua, Thorn’s CEO, told me in May. By acknowledging reality — and building for the weakest link — Instagram is setting a good example for its peers.

Here’s hoping they follow suit — and go further.


This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.
