Here’s Why ChatGPT Now Reminds You to Take a Break

If you frequently chat with ChatGPT, you may have been surprised by a new pop-up this week. After a long conversation, a “Just checking in” pop-up may appear with the message, “You’ve been chatting for a while now — is it time to take a break?”

The pop-up prompts you to choose between “Continue Chatting” and “This was helpful.” Depending on your perspective, you might see this as a welcome reminder to put the app down for a while, or as a condescending suggestion that you don’t know how to limit your time with the chatbot.

Don’t take it personally. It may seem like OpenAI cares about your usage habits with this pop-up, but the real reason for the change is a little darker.

Addicted to ChatGPT

This new usage reminder comes as part of a larger announcement from OpenAI on Monday titled “Why We’re Optimizing ChatGPT.” In the post, the company says it values how you use ChatGPT, and that while it wants you to use the service, it also sees value in you using it less often. This is partly thanks to features like the ChatGPT Agent, which can take actions on your behalf and make your time in ChatGPT more efficient and productive.

This is all well and good, of course: if OpenAI can make ChatGPT conversations just as useful to users in a fraction of the time, then so be it. But this isn’t just about helping users get answers from ChatGPT faster; it’s a direct response to how addictive ChatGPT can be, especially for those who rely on the chatbot for psychological or emotional support.

You don’t have to read between the lines, either. To OpenAI’s credit, the company is directly addressing serious issues that some chatbot users have run into in recent years, including an update earlier this year that made ChatGPT too nice. Chatbots are typically enthusiastic and friendly, but the 4o model update went too far: ChatGPT would affirm that all your ideas—good, bad, or ugly—were correct. In the worst cases, the bot ignored signs of delusion and directly fed into users’ distorted viewpoints.

OpenAI directly acknowledges that this has happened, though the company says such cases are “rare.” Still, the company is tackling the problem head-on: In addition to reminding people to take a break from using ChatGPT, the company says it’s refining its models to spot signs of stress and avoid answering tough questions like “Should I break up with my partner?” OpenAI says it’s even partnering with experts, doctors, and health professionals in various ways to achieve this goal.

We should all use a little less AI

It’s certainly good that OpenAI wants you to use ChatGPT less, and that it’s actively acknowledging these problems and working to solve them. But I don’t think relying on OpenAI alone is enough here: what’s good for the company won’t always be good for you. And I think we could all benefit from stepping back from generative AI.

As more people turn to chatbots for help with work, relationships, or mental health, it’s important to remember that these tools aren’t perfect, or even fully understood. As we saw with GPT-4o, AI models can be flawed and encourage dangerous ways of thinking. AI models can also hallucinate, or in other words, completely make things up. You may think the information a chatbot gives you is 100% accurate, but it could be full of errors or outright lies — how often do you fact-check your conversations?

Trusting AI with your personal thoughts also puts your privacy at risk, as companies like OpenAI store your chats and don’t provide any of the legal protections that a licensed medical professional or legal representative would. Additionally, new research shows that the more we rely on AI, the less we rely on our own critical thinking. While the claim that AI is making us “dumber” may be an exaggeration, I’m concerned about the amount of brainpower we’re outsourcing to these new bots.

Chatbots aren’t licensed therapists; they’re prone to making things up; they have little privacy protection; and they can even trigger delusional thinking. It’s great that OpenAI wants you to use ChatGPT less, but maybe we should use these tools even less.

Disclosure: Lifehacker’s parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging that it infringed Ziff Davis’ copyrights in the training and operation of its AI systems.
