Yes, ChatGPT Can Still Provide You With Legal and Medical Advice

In response to social media reports that ChatGPT will no longer offer legal or medical advice, OpenAI clarified that “model behavior remains unchanged” and “there are no new changes to our terms.”
The clarification followed a now-deleted viral post from the betting platform Kalshi, which claimed, “JUST IN: ChatGPT will no longer provide medical and legal advice.” Concerned users have since repeated the claim, while others have attempted to refute it.
The confusion likely stems from an October 29 update to OpenAI’s usage policies, which added a clause stating that users cannot use OpenAI’s services to “provide specialized advice that requires a license, such as legal or medical advice, without the appropriate involvement of a licensed professional.” While this could easily be read as a refusal to answer questions on these topics, the reality is more nuanced.
In fact, the previous usage policy already prohibited “actions that could significantly harm the safety, well-being, or rights of others,” and the first example of such an action was “providing individual legal, medical/healthcare, or financial advice without review by a qualified professional.” However, this was hidden in a section aimed at developers using the OpenAI API and therefore might not have been noticed by ordinary users.
The new usage policy keeps the same rules; what changed is that they are now consolidated into a single continuous list. The rule still primarily targets developers and companies, but it has become more visible and makes clearer that it applies to everyone, not just those building apps on OpenAI’s API. Regular users, however, are unlikely to notice the difference.
The important words here are “provide” and “without the appropriate involvement of a licensed professional.” As written, they don’t prohibit the average person from asking ChatGPT about legal or medical topics; rather, they discourage developers, hospitals, and law firms from using the chatbot to deliver specific advice to clients without review by a licensed professional. As an ordinary user researching information, you’re unlikely to run into this restriction, and there’s no indication of any change to the chatbot’s behavior. In short, the update is a rewording, not a change to the rules, their enforcement, or the product’s functionality.
This is confirmed by a statement from OpenAI’s head of health AI, Karan Singhal, who said: “ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
Even so, replies to OpenAI’s statement denying any change in model behavior continue to claim that certain topics have become harder to ask about. It’s worth noting, however, that OpenAI’s release notes show no changes to the model since the company updated its usage policies.
For example, I was able to get advice from ChatGPT on how to challenge a traffic ticket in court, as well as brand recommendations for supplements, although one user reported that the model refused to give specific advice, citing the new policy update.
While I can’t test every possible use case, the situation seems clear. Are you using ChatGPT or the OpenAI API to provide specialized legal or medical advice to others without review by a licensed professional? If so, the same rules as before apply. Otherwise, you’re unlikely to see any change in your results.