ChatGPT gets parental controls requiring teen and parent approval

OpenAI has launched parental controls for ChatGPT, marking a significant step toward making artificial intelligence safer for younger users. The new feature addresses a longstanding gap in AI safety: while ChatGPT has maintained a minimum age requirement of 13, parents previously had no way to monitor or limit how their teenagers used the popular AI assistant.

The timing reflects growing concerns about AI’s impact on young people, particularly as chatbots become increasingly sophisticated and integrated into daily life. These controls offer families a structured approach to AI interaction, balancing teenage independence with parental oversight in an emerging digital landscape.

How the new parental controls work

ChatGPT’s parental control system operates through linked accounts that require mutual consent between parents and teenagers. Neither party can impose restrictions unilaterally—both must agree to connect their accounts before any controls take effect.

Once linked, parents gain access to several key management tools. The system automatically applies stronger protections against sensitive content, filtering out graphic material and viral challenges that could be harmful to younger users. Parents can also control whether ChatGPT remembers previous conversations to personalize responses, a feature that stores chat history to improve interaction quality.

The controls extend to usage timing through “quiet hours,” allowing parents to set specific times when teenagers cannot access ChatGPT. This feature addresses concerns about excessive screen time and helps keep AI use from interfering with sleep, homework, or family time.

Additional restrictions include disabling access to ChatGPT’s voice interaction mode and image generation capabilities. Parents can also determine whether their teenager’s conversations contribute to OpenAI’s model improvement process, providing control over data usage for AI development.
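
OpenAI has not published a developer API for these controls, but the toggles described above map naturally onto a simple settings structure. The sketch below is purely illustrative: the TeenSettings class, its field names, and the is_within_quiet_hours helper are invented for this example and are not part of any OpenAI product or API.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical model of the per-teen settings described above.
# None of these names come from OpenAI; this is only an illustration.
@dataclass
class TeenSettings:
    reduce_sensitive_content: bool = True   # stricter filtering, on by default once linked
    memory_enabled: bool = True             # ChatGPT may remember prior conversations
    voice_mode_enabled: bool = True
    image_generation_enabled: bool = True
    contribute_to_training: bool = True     # conversations may be used to improve models
    quiet_hours_start: time | None = None   # e.g. time(21, 0)
    quiet_hours_end: time | None = None     # e.g. time(7, 0)

def is_within_quiet_hours(settings: TeenSettings, now: time) -> bool:
    """Return True if access should be blocked at the given time of day.

    Handles windows that wrap past midnight, e.g. 21:00-07:00.
    """
    start, end = settings.quiet_hours_start, settings.quiet_hours_end
    if start is None or end is None:
        return False
    if start <= end:                  # same-day window, e.g. 13:00-15:00
        return start <= now < end
    return now >= start or now < end  # overnight window, e.g. 21:00-07:00
```

In this toy model, a parent who disables image generation and sets a 9 p.m. to 7 a.m. quiet window would create TeenSettings(image_generation_enabled=False, quiet_hours_start=time(21, 0), quiet_hours_end=time(7, 0)), and is_within_quiet_hours(settings, time(23, 30)) would return True.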

Privacy and safety boundaries

OpenAI has designed the system with careful attention to teenage privacy. Parents cannot read their teenager’s actual conversations with ChatGPT under normal circumstances. The company will only share chat excerpts in rare cases where trained safety reviewers identify potential serious safety risks.

The system includes transparency measures to maintain trust. If teenagers disconnect their accounts from parental oversight, OpenAI automatically notifies parents of this change. This approach balances teenage autonomy with parental awareness, avoiding overly restrictive monitoring while ensuring parents stay informed about significant changes.

Setting up parental controls

Accessing the new controls requires navigating to the Accounts section within ChatGPT’s Settings menu, where a new “Parental Controls” option now appears. The interface uses intuitive slider controls to adjust various restrictions and permissions.

The setup process begins when either a parent or teenager sends an invitation through the parental controls interface. The receiving party must accept this invitation before any restrictions take effect. This mutual consent requirement prevents unilateral control while encouraging family discussions about appropriate AI usage.
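
Because neither side can impose the link, the setup behaves like a small two-party handshake: an invitation from one account does nothing until the other account accepts it. The sketch below is a hypothetical illustration of that logic only; the FamilyLink class and its method names are invented for the example and are not OpenAI code.

```python
from enum import Enum

class LinkState(Enum):
    NONE = "none"        # accounts not linked, no controls apply
    INVITED = "invited"  # one party sent an invitation, still no controls
    LINKED = "linked"    # both parties consented, controls take effect

class FamilyLink:
    """Hypothetical model of the parent-teen account link."""

    def __init__(self) -> None:
        self.state = LinkState.NONE
        self.invited_by: str | None = None

    def send_invitation(self, sender: str) -> None:
        # Either the parent or the teen can start the process.
        if self.state is LinkState.NONE:
            self.state = LinkState.INVITED
            self.invited_by = sender

    def accept_invitation(self, acceptor: str) -> None:
        # Restrictions only take effect once the *other* party accepts.
        if self.state is LinkState.INVITED and acceptor != self.invited_by:
            self.state = LinkState.LINKED

    def disconnect(self) -> None:
        # The link can be severed later; per OpenAI, the parent is notified.
        self.state = LinkState.NONE
        self.invited_by = None
```

If the teenager later disconnects, the link simply returns to its unlinked state, and, as described above, OpenAI notifies the parent of the change.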

Parents can adjust settings at any time through the control panel, allowing for flexibility as teenagers demonstrate responsibility or as family needs change. The system provides immediate feedback on setting changes, ensuring parents understand how each adjustment affects their teenager’s ChatGPT experience.

AI safety system overhaul

These parental controls arrive alongside a broader safety update affecting all ChatGPT users. OpenAI has implemented what it calls a “safety routing system” that automatically switches users to different AI models when conversations involve sensitive or emotional topics.

The system aims to provide more thoughtful responses during difficult conversations: when ChatGPT detects emotional distress or sensitive subject matter, it may switch mid-conversation to specialized models trained to handle that content carefully.
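
OpenAI has not published how the routing decision is made, but conceptually it works like a per-message gate: each turn is screened, and flagged turns are answered by a different model. The sketch below is a speculative illustration; the model names and the keyword heuristic standing in for a trained classifier are invented for the example.

```python
# Speculative sketch of per-message safety routing. The model names and the
# keyword heuristic are placeholders, not OpenAI's actual system.
DEFAULT_MODEL = "general-model"
SAFETY_MODEL = "sensitive-conversation-model"

SENSITIVE_CUES = ("hurt myself", "hopeless", "panic", "can't go on")

def looks_sensitive(message: str) -> bool:
    """Crude stand-in for a trained classifier that flags emotional distress."""
    lowered = message.lower()
    return any(cue in lowered for cue in SENSITIVE_CUES)

def route_message(message: str) -> str:
    """Pick which model should handle this turn of the conversation."""
    return SAFETY_MODEL if looks_sensitive(message) else DEFAULT_MODEL
```

Because the decision is made turn by turn, a single flagged message is enough to switch models mid-conversation, which matches the behavior users have complained about.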

However, the system has faced criticism for being overly sensitive. Users report being switched to different models over relatively minor remarks; in one reported case, mentioning that a storm had knocked over a plant prompted ChatGPT to respond with crisis-level reassurance: “Just breathe. It’s going to be okay. You’re safe now.”

This overcautious approach has frustrated paying subscribers who feel they’re being downgraded to inferior models despite their premium subscriptions. OpenAI acknowledges the system needs refinement and expects improvements as the technology matures.

Broader context and industry implications

These developments represent OpenAI’s response to mounting pressure from safety advocates, policymakers, and families concerned about AI’s influence on young people. The company has faced criticism following several high-profile incidents involving users in crisis situations while interacting with ChatGPT.

The parental controls also reflect the broader AI industry’s struggle to balance innovation with responsibility. As AI assistants become more sophisticated and human-like, questions about appropriate usage boundaries become increasingly complex, particularly for developing minds.

OpenAI worked with child safety experts, advocacy groups, and policymakers to develop these controls, suggesting a more collaborative approach to AI safety regulation. The company indicates these initial controls represent a starting point, with plans to expand and refine the system based on user feedback and evolving safety research.

Practical considerations for families

The rollout begins today for web users, with mobile applications receiving the update in the coming weeks. OpenAI has created dedicated resources to help parents understand the controls and determine appropriate settings for their families.

Families considering these controls should discuss expectations and boundaries before linking accounts. The mutual consent requirement provides an opportunity for conversations about responsible AI usage, digital citizenship, and the role of artificial intelligence in teenagers’ academic and social lives.

The controls work best when integrated into broader digital wellness strategies rather than serving as standalone solutions. Parents should consider these tools alongside existing screen time management, educational technology policies, and ongoing conversations about online safety and critical thinking skills.
