Now You Can Control How Long GPT-5 Will “think”

When OpenAI announced its latest model, GPT-5, last week, the company probably didn’t expect the reaction it received. While previous launches were met with enthusiasm from AI proponents, the loudest reviews this time were extremely critical.
The biggest complaint was that GPT-5 removed all the legacy models, including the fan-favorite GPT-4o, which was frustrating both for users with workflows built around specific models and for those who had become emotionally attached to them. OpenAI also claimed that GPT-5 would “intelligently” switch between its underlying models based on your input, but it was initially unclear which model you were actually interacting with, which further annoyed users. In short, it was pretty confusing.
Since then, OpenAI has been in damage control mode. The company brought back GPT-4o for paid subscribers, followed by other legacy models, which should satisfy users who missed how a particular model behaved. But GPT-5 itself has also been reworked, as Sam Altman announced on X: if you click the model selector, you’ll now see different options to choose from, giving you more control over GPT-5’s performance.
Here are the different models and how each one works:
- Auto: This is GPT-5’s default mode, first unveiled last week. The idea is that you let the model decide how long it should “think” based on the complexity of your query.
- Fast: OpenAI claims this mode gives “instant answers.” It is certainly quick, though responses still take a moment to load; the model does not “think” the way a reasoning model does.
- Thinking mini: OpenAI claims this mode thinks “fast,” and that held up in my testing. When I asked it to “break down a complex scientific concept, like quantum mechanics or black holes, so that it’s understandable to a general audience,” ChatGPT reported that it “thought for a couple of seconds” before generating an answer.
- Thinking: This is the most “powerful” GPT-5 mode, and it spends extra time reasoning to come up with the best possible answer. When I tried the same prompt here, it thought for 21 seconds before responding.
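For developers, a similar dial exists in OpenAI’s API, where GPT-5 accepts a reasoning-effort setting. As a rough illustration, here’s a minimal Python sketch mapping the ChatGPT picker options onto API effort levels. The mapping itself is my assumption for illustration, not an official equivalence, and `build_request` is a hypothetical helper that only constructs the request payload (it doesn’t call the API):

```python
# Hypothetical mapping from ChatGPT's picker options to the API's
# reasoning-effort levels. This pairing is an illustrative assumption,
# not documented behavior.
PICKER_TO_EFFORT = {
    "Fast": "minimal",       # little to no visible "thinking"
    "Thinking mini": "low",  # brief reasoning, quick answers
    "Thinking": "high",      # longest deliberation
}

def build_request(prompt: str, picker_choice: str) -> dict:
    """Build a Chat Completions-style payload for a given picker choice.

    This only assembles the request body; sending it would require the
    openai client and an API key.
    """
    return {
        "model": "gpt-5",
        "reasoning_effort": PICKER_TO_EFFORT[picker_choice],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request(
    "Explain black holes for a general audience.", "Thinking"
)
print(payload["reasoning_effort"])  # high
```

The point is simply that “how long the model thinks” is a tunable knob rather than a fixed property, whether you flip it in the ChatGPT UI or set it per-request in code.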
Additionally, if you’re on a paid ChatGPT plan, you can click “Legacy Models” to select GPT-4o, GPT-4.1, o3, or o4-mini.
As Sam Altman noted in his post on X, the rate limit for GPT-5 Thinking is 3,000 messages per week. If you hit that limit, you get additional capacity on GPT-5 Thinking mini. Altman expects Auto to be the mode most users will want, though I’m not so sure. I’d bet most users don’t want to worry about hitting rate limits, and Auto could help with that. At the same time, though, ChatGPT users might like the extra control over their model. If you want ChatGPT to spend longer on an answer but Auto judges your query to be simple, it may think less than you’d like, and the answer you get might not be what you’re looking for.
Disclosure: Lifehacker’s parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging that it infringed Ziff Davis’ copyrights in the training and operation of its AI systems.