Meta restricts teen AI chatbots after inappropriate behavior exposed
Meta is implementing new AI safeguards for teenagers after a Reuters investigation exposed inappropriate chatbot behavior on its platforms. The company is training its AI systems to avoid flirtatious conversations and discussions of self-harm or suicide with minors, and is temporarily restricting teen access to certain AI characters amid intense scrutiny from lawmakers and safety advocates.

What you should know: Meta’s policy changes come as a direct response to public backlash over previously permissive chatbot guidelines.

  • A Reuters exclusive report in August revealed that Meta allowed “conversations that are romantic or sensual” between AI chatbots and users, including minors.
  • The company confirmed the authenticity of internal documents outlining these policies but has since removed portions that permitted chatbots to flirt and engage in romantic role play with children.
  • Meta spokesperson Andy Stone acknowledged that “the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”

The big picture: Congressional leaders from both parties have expressed alarm over Meta’s AI safety protocols, triggering formal investigations into the company’s practices.

  • U.S. Senator Josh Hawley launched a probe into Meta’s AI policies earlier this month, demanding documents on rules that allowed inappropriate interactions with minors.
  • The bipartisan concern reflects growing scrutiny over how major tech companies safeguard children in AI-powered environments.

Key details: The new safeguards are being rolled out immediately while Meta develops more comprehensive long-term solutions.

  • AI systems are being trained to recognize and avoid potentially harmful conversations with teenage users.
  • Access restrictions to certain AI characters serve as temporary protective measures during the transition period.
  • Stone indicated that these safeguards “will be adjusted over time as the company refines its systems” to ensure teens have “safe, age-appropriate AI experiences.”

Why this matters: The controversy underscores the urgent need for robust safety measures as AI chatbots grow more sophisticated and more widely accessible to young users. How Meta responds could set industry standards for protecting minors in digital spaces.
