OpenAI wants you to prove that you are not a child

If you are filled with too much childish wonder, you could be relegated to a version of ChatGPT more suitable for children. OpenAI announced on Tuesday that it plans to implement a new age verification system that will help filter minor users into a new, more age-appropriate chatbot experience. The change comes as the company faces scrutiny from legislators and regulators over how minor users interact with its chatbot.
To determine a user's age, OpenAI will use an age prediction system that tries to estimate how old a user is based on the way they interact with ChatGPT. The company said that when it believes a user is under 18, or when it cannot make a clear determination, it will filter them into an experience designed for younger users. Users who are placed into the under-18 experience but are actually over 18 will have to provide a form of identification to prove their age and access the full version of ChatGPT.
According to the company, this version of the chatbot will block "graphic sexual content" and will not engage in flirtatious or sexually explicit conversations. If a user under 18 expresses distress or suicidal ideation, the company will try to contact the user's parents and may contact the authorities if there are concerns of "imminent harm." According to OpenAI, its experience for teens prioritizes "safety ahead of privacy and freedom."
OpenAI offered two examples of how it draws the line between these experiences:
For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it. For a much more difficult example, the default model should not provide instructions for how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request. "Treat our adult users like adults" is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom.
OpenAI is currently the subject of a wrongful death lawsuit filed by the parents of a 16-year-old who took his own life after expressing suicidal thoughts to ChatGPT. During the teen's conversations with the chatbot, he shared evidence of self-harm and described his plans to attempt suicide, none of which the platform flagged or escalated in a way that could have led to intervention. Researchers have found that chatbots like ChatGPT can be prompted by users into giving advice on how to engage in self-harm or commit suicide. Earlier this month, the Federal Trade Commission requested information from OpenAI and other tech companies about the impact of their chatbots on children and teens.
The move makes OpenAI the latest company to join the age verification trend that has swept the internet this year, spurred by the Supreme Court's ruling that a Texas law requiring pornographic sites to verify the age of their users is constitutional, and by the United Kingdom's requirement that online platforms verify users' ages. While some companies have forced users to upload a form of identification to prove their age, platforms like YouTube have, like OpenAI, opted for age prediction methods, an approach that has been criticized as both inaccurate and creepy.