That lawless age of the early Internet, when kids could get away with anything because their parents had no idea how to use (let alone monitor) the technology, is truly over. On Monday, Instagram announced it would begin testing its AI in the United States to “proactively find accounts we suspect belong to teens, even if the account lists an adult birthday, and place them in Teen Account settings,” per a company blog post.
Last year, Instagram began automatically placing any user with a registered birthday under age 18 in a “Teen Account.” Teen Accounts come with various safety features, including “age-appropriate” content filtering on feeds, a mandatory “sleep mode” from 10 PM to 7 AM, and restrictions on messaging with strangers. The settings also allow parents to oversee their child’s social media, including access to their direct messages and the ability to toggle the account between private and public. Starting today, if Instagram’s AI detects that an account may be run by a minor—based on factors like birthday posts and account interactions—that user will automatically be placed in a Teen Account. Instagram says users will be able to change their settings if the AI makes a mistake.
Unsurprisingly, Instagram’s algorithmic filters came under scrutiny not long after the widespread “Teen Account” rollout. A report earlier this year revealed that the app had restricted LGBTQ+ content (“#lesbian,” “#bisexual,” “#gay,” “#trans,” etc.) from feeds because the filter marked those topics as “sexually suggestive.” The issue of bias in AI and algorithms is an ongoing one, with Meta recently announcing that it was training its Llama 4 AI to be more right-wing. “It’s well-known that all leading LLMs have had issues with bias — specifically, they historically have leaned left when it comes to debated political and social topics,” a Meta blog post explained. “Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue.”