Character.ai Will Soon Ban Children From Using Its Chatbots.

Leading AI-powered chatbot platform Character.ai announced yesterday that it will no longer allow individuals under 18 to engage in open conversations with its chatbots. Character.ai’s parent company, Character Technologies, stated that the ban will take effect on November 25th, with temporary restrictions in place for children until then, while “young users will be able to transition to alternative creative features, such as creating videos, stories, and broadcasts featuring AI characters.”
In a statement posted online, Character Technologies said it was making the changes “in light of the evolving landscape surrounding artificial intelligence and teenagers,” which seems like a nice way of saying “because of lawsuits.” Character Technologies was recently sued by a Florida mother and by families in Colorado and New York, who allege that their children died by suicide or attempted suicide after interacting with the company’s chatbots.
These lawsuits aren’t isolated—they’re part of growing concern about AI-powered chatbots interacting with minors. A damning report on Character.ai, published in September by online safety advocates Parents Together Action, detailed disturbing examples of chatbot interactions, such as a Rey from Star Wars bot advising a 13-year-old girl on how to hide from her parents that she wasn’t taking her prescribed antidepressants, and a Patrick Mahomes bot offering cannabis edibles to a 15-year-old girl.
Character Technologies also announced the release of new verification tools and plans to establish an “AI Safety Lab,” which it described as “an independent, nonprofit organization dedicated to developing innovative safety solutions for next-generation AI entertainment features.”
As of early 2025, Character.ai had over 20 million monthly users; the majority reported being between 18 and 24 years old, and only 10% reported being under 18.
The Future of Artificial Intelligence with Age Limits
Character Technologies claims in its statement that its new rules put it ahead of other AI companies in terms of restrictions for minors. For example, Meta recently added parental controls to its chatbots but stopped short of completely banning their use by minors.
Other AI companies are likely to face similar requirements in the future: a California law set to take effect in 2026 requires AI-powered chatbots to prevent children from accessing sexually explicit content and interactions that could provoke self-harm or violence, and to have protocols in place to identify suicidal ideation and refer users to crisis services.