OpenAI has introduced new parental control features for ChatGPT, responding to growing concerns about AI safety after the parents of a teenager who died by suicide filed a lawsuit alleging the chatbot provided harmful guidance on self-harm. The case has prompted the Microsoft-backed company to prioritize safeguards for younger users.
The controls, available on web and mobile platforms, allow parents and teens to mutually activate enhanced protections by linking accounts. Features include limiting exposure to sensitive content, disabling chat history retention, and restricting data usage for AI training. Parents can also set quiet hours and block access to voice mode or image generation tools.
The controls preserve user privacy: parents cannot view their teen's chat transcripts. OpenAI said, however, that it may notify guardians in “rare cases” where a serious safety risk is detected. The company is also developing an age-prediction system that will automatically apply teen-appropriate settings to users it identifies as under 18.
The move comes amid increased U.S. regulatory scrutiny of AI platforms. Meta recently implemented similar safeguards for its AI products, including blocking discussions of self-harm with minors. ChatGPT now serves 700 million weekly active users, a scale that has amplified calls for responsible AI development across the industry.
Reference(s):
“OpenAI launches parental controls in ChatGPT after teen's suicide,” cgtn.com