This story discusses suicide. If you or someone you know is having thoughts of suicide, contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
OpenAI, the company behind ChatGPT, an artificial intelligence chatbot, is rolling out extensive parental controls intended to make its technology safer for teenagers. The rollout is expected to take place over the next 120 days.
The company had already announced plans to update its models to better support struggling users, help users keep track of how much time they spend on the app and assist with personal challenges. However, OpenAI said recent cases of users, including teenagers, turning to the app during acute crises prompted the company to detail the rollout of the new policies.
In this photo illustration, the OpenAI logo is displayed on a mobile phone screen with the ChatGPT logo in the background. (Photo illustration by Idrees Abbas/SOPA Images/LightRocket via Getty Images)
Within the next month, parents will be able to link their accounts to their teenagers' accounts, determine how ChatGPT responds to their teen, manage memory and chat history features and receive notifications if their child uses the technology in a moment of acute distress, according to OpenAI.
“We have seen people turn to it in the most difficult moments. That is why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input,” the company said in a statement.

High school students use laptops during a lesson in a classroom. (izusek / Getty Images)
OpenAI emphasized the role that medical and mental health professionals have played in its push to improve its product. Although it has already assembled a council of experts, it will expand the group to include people with “deep expertise” in eating disorders, substance use and adolescent health.
“Earlier this year, we began convening a council of experts in youth development, mental health and human-computer interaction,” OpenAI said. “Their input will help us define and measure well-being, set priorities and design future safeguards, including future iterations of parental controls, with the latest research in mind. While the council will advise on our product, research and policy decisions, OpenAI remains responsible for the choices we make.”

OpenAI consulted mental health care professionals in developing ChatGPT's parental controls. (iStock)
The rollout comes amid a lawsuit filed in California by the parents of 16-year-old Adam Raine, who took his own life in April 2025 after consulting ChatGPT about his mental health.
When his parents searched for answers after the tragedy, they discovered he had been interacting with ChatGPT. In their lawsuit, Raine's parents allege that “ChatGPT actively helped Adam explore suicide methods.”
OpenAI says the protocols it is rolling out over the next 120 days are only the start of a longer process, as the company works to make AI safer and more useful.