OpenAI is introducing new parental controls for ChatGPT on web and mobile, letting parents link accounts and better supervise their teens’ interactions with the AI.
The move comes after a California lawsuit filed following the death of a 16-year-old, whose parents allege the chatbot gave him detailed instructions on self-harm.
With these new safeguards, OpenAI aims to make the platform safer for younger users while giving families more control over how the AI is used.
Under the new controls, parents will be able to limit their teen’s exposure to sensitive content, manage whether ChatGPT retains memory of past conversations, and choose if interactions can be used to train OpenAI’s models, the Microsoft-backed company stated on X.
Parents will also have the option to set quiet hours to restrict access at certain times, and to disable features like voice mode, image generation, and editing, OpenAI added. However, they will not be able to view a teen’s chat transcripts.
OpenAI will also notify parents in moments of distress, such as when the AI recognizes that a teen may be thinking about harming themselves.
“If our systems detect potential harm, a small team of specially trained people reviews the situation. If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out,” the company explained in its blog post.
