Meta chatbots will now avoid bringing up topics like suicide and self-harm with teens

The company was in hot water earlier this month after a Reuters investigation revealed that it permitted "sensual" chats with minors.

Meta is updating the way it trains its chatbots so they can no longer engage in conversations about self-harm, suicide, disordered eating, or topics of a potentially inappropriate romantic nature with teens, TechCrunch reports. These are only interim changes, the company shared, but it said it was planning to release more robust safety regulations for minors in the future.

Of course, the fact that this is an update at all is pretty distressing. The company stoked anger earlier this month after a Reuters investigation revealed internal Meta documents stating that chatbots were allowed to “engage a child in conversations that are romantic or sensual,” generate false medical information, and “create statements that demean people on the basis of their protected characteristics” like race. One of those horrified people was Neil Young, whose Reprise Records-run Facebook page wrote that “Meta’s use of chatbots with children is unconscionable” and there will no longer be “any Neil Young related activities” happening on Facebook. The portions of the documents quoted in the Reuters report have since been removed, according to a statement provided to the outlet by the company.

“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” a Meta spokesperson told TechCrunch in a new statement. “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”

Going forward, minors will only have access to AI characters that promote education and creativity, the spokesperson elaborated. Previously, teens could also talk to AI profiles TechCrunch characterized as “sexualized chatbots,” including ones named “Step Mom” and “Russian Girl.” 

For some, these updates may be too little, too late. Republican Senator Josh Hawley launched an official probe into Meta’s AI policies after the internal documents came to light earlier this month. “We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” he said. A group of 44 state attorneys general, members of the National Association of Attorneys General, also sent a letter to a number of AI companies (including Meta) emphasizing their “resolve to use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” 

The letter references a lawsuit filed against Google by a mother who believes one of its “highly-sexualized” chatbots (per the attorneys general) encouraged her teen son to commit suicide, as well as another suit alleging that a Character.AI bot suggested a teen should kill his parents. (OpenAI also recently announced it would be changing its policies after a lawsuit alleged a different teenager committed suicide after “months of encouragement from ChatGPT.”) “We are uniformly revolted by this apparent disregard for children’s emotional well-being and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws,” the letter states.
