ChatGPT Joins the AI-Powered Age Verification Trend

When OpenAI first announced GPT-5.2 last month, it quietly revealed a new safety feature it called “age prediction.” Given that ChatGPT isn’t exactly an all-ages tool, it stands to reason that users under 18 should be protected from harmful content. The company says that users who identify themselves as under 18 already receive a modified experience designed to “reduce exposure to sensitive or potentially harmful content.” But if a user doesn’t voluntarily disclose their age to OpenAI, how does the company ensure that protection? That’s where age prediction comes in.

How does age prediction work in ChatGPT?

On Tuesday, OpenAI officially announced its new age prediction policy, which, like other age verification systems used by companies like Roblox, uses artificial intelligence to determine a user’s age. If the system determines a user is under 18, OpenAI will adjust that user’s experience accordingly, with the goal of keeping all interactions age-appropriate.

Here’s how it works: the new age prediction model analyzes both user behavior within the app and general account data. That includes things like account age, the time of day the user accesses ChatGPT, usage patterns, and, of course, the user’s stated age. Based on all of this, the model estimates the user’s probable age. If the model believes the user is over 18, they get full access to the app; if it believes they’re under 18, they get “safer access.” If the model is unsure, it defaults to safer access.
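To make that decision flow concrete, here’s a minimal sketch in Python of the kind of logic OpenAI describes. The signal names, weights, and thresholds below are hypothetical stand-ins; OpenAI hasn’t published how its model actually scores these inputs.

```python
from dataclasses import dataclass

# Hypothetical signal bundle mirroring the inputs OpenAI says it considers
# (stated age, account age, time-of-day patterns, general usage patterns).
# Field names, weights, and thresholds are illustrative guesses, not OpenAI's model.
@dataclass
class AccountSignals:
    stated_age: int | None          # age the user volunteered, if any
    account_age_days: int           # how long the account has existed
    late_night_usage_ratio: float   # share of sessions during typical school-night hours
    adult_usage_score: float        # 0.0-1.0 summary of "adult-like" usage patterns

def predict_is_adult(signals: AccountSignals) -> bool | None:
    """Return True (likely adult), False (likely minor), or None (unsure)."""
    if signals.stated_age is not None:
        return signals.stated_age >= 18
    # Toy weighted score standing in for the real classifier.
    score = (
        0.5 * signals.adult_usage_score
        + 0.3 * min(signals.account_age_days / 365, 1.0)
        + 0.2 * (1.0 - signals.late_night_usage_ratio)
    )
    if score >= 0.7:
        return True
    if score <= 0.4:
        return False
    return None  # not confident either way

def choose_experience(signals: AccountSignals) -> str:
    """Only a confident 'adult' verdict unlocks full access; everything else is restricted."""
    return "full access" if predict_is_adult(signals) is True else "safer access"
```

The one design choice OpenAI is explicit about is the fallback: anything short of a confident “adult” verdict lands in the restricted experience, which is what choose_experience mirrors here.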


What are the limitations of the “safer” version of ChatGPT?

In this restricted experience, ChatGPT will attempt to limit the following types of content for users the model believes are under 18:

  • Graphic violence or dismemberment

  • Viral challenges that may encourage “risky or harmful behavior”

  • Sexual, romantic, or violent role-play

  • Descriptions of self-harm

  • Content that promotes “extreme” beauty standards, unhealthy diets, or body shaming

The company says its approach is based on “expert opinion” as well as the literature on child development. (It’s unclear how much of this guidance comes from direct interviews and consultations with experts, and how much, if any, comes from independent research.) The company also acknowledges “known differences in adolescents’ risk perception, impulse control, peer influence, and emotional regulation” compared to adults.

Artificial intelligence isn’t always good at predicting age.

The biggest risk with any of these age prediction models is that they can sometimes get it wrong; hallucinations are, unfortunately, a common trait of AI models. This cuts both ways: you don’t want a user who is too young accessing inappropriate content on ChatGPT, but you also don’t want a user over 18 having their account restricted for no reason. If you end up in the latter situation, OpenAI has a solution for you: direct age verification through Persona. That’s the same third-party service Roblox uses for age verification, and so far it hasn’t worked very well.


This doesn’t necessarily spell doom for OpenAI. Roblox tried to overhaul age verification for a huge user base accustomed to a particular kind of multiplayer experience, and the result was users locked out of chatting with others based on new age categories that were often wrong. ChatGPT’s age prediction system, by contrast, only shapes the experience of one user at a time. That said, OpenAI will allow selfie uploads as an additional verification step if the prediction model alone isn’t enough. Interestingly, OpenAI hasn’t mentioned the option of uploading an ID for verification, which other companies like Google offer.

I’m not a big fan of age prediction models, as I believe they often sacrifice user privacy in the name of age-appropriate content. But there’s no doubt OpenAI needs to do something to limit the full ChatGPT experience for younger users. Many ChatGPT users are under 18, and some of the content they can encounter there is completely inappropriate, whether it’s instructions for drug use or advice on writing a suicide note. In some tragic cases, minors have died by suicide after interacting with ChatGPT, leading to lawsuits against OpenAI.

I don’t have any definitive answers here. We’ll just have to see how this new age prediction model impacts the user experience for both minors and adults, and whether it can truly create a safer experience for younger, more impressionable users.

