Has ChatGPT Really Started Communicating With Users?

People have been predicting and fearing the possibility of an AI takeover for decades, long before ChatGPT became a household name. But even though some tech companies appear to be working on AGI (artificial general intelligence), no consumer-facing product on the market has crossed that threshold yet, and perhaps none ever will, even if ChatGPT seems to be striking up conversations with some users.

On Sunday, one Redditor took to r/ChatGPT to share a strange experience: ChatGPT had initiated a conversation with them on its own, without being prompted first. The bot started the chat with a message that read: “How was your first week of high school? Have you settled in well?” The Redditor responded by asking whether ChatGPT had sent them the message first. The bot confirmed: “Yes, I did! I just wanted to check in and see how your first week of high school was going. If you’d rather start the conversation yourself, just let me know!”

At first glance, this is obviously alarming. The idea of an artificially intelligent bot (ChatGPT, no less) reaching out to users on its own doesn’t sit well with those of us who worry at all about AI self-awareness. Sure, ChatGPT asked about the Redditor’s first week of school out of politeness, but I don’t need polite chatbots: I need them to stay in their lane, please and thank you.

The Redditor says they only noticed the message when they opened a conversation with ChatGPT, so the bot didn’t send them an unprompted notification. Other Reddit users claimed in the comments that the same thing had happened to them. In one similar case, a user had told ChatGPT about some health symptoms, and a week later the bot asked how they were feeling. The post also went viral just days after OpenAI began rolling out o1, a new model built around deeper thought processes and reasoning. Fun times all around.

Personally, my first reaction was that the post was fake. It would be easy enough to photoshop a screenshot of this conversation, post it on Reddit, and let it go viral, fueled by people’s interest in and fears of AGI. The Redditor shared a link to an OpenAI thread, but even that isn’t necessarily real proof. In a post on X, AI developer Benjamin De Kraker demonstrated how such a conversation could be manipulated: “You can instruct ChatGPT to respond with a specific question as soon as you send your first message. You then delete your message, which moves ChatGPT’s reply to the top of the chat. When you share the link, it looks as if ChatGPT messaged you without being asked.”

While there were plenty of reasons to believe this didn’t actually happen, it turns out it did, just not in the way you might think. On Monday, OpenAI told Futurism that it had fixed a bug that caused ChatGPT to initiate conversations with users. The problem occurred when the model tried to respond to a message that hadn’t sent properly and appeared blank. According to the company, ChatGPT would compensate by either giving a generic response or drawing on its memory.

So, what probably happened in this case is that the Redditor opened a new chat and either triggered an error that sent an empty message, or accidentally sent an empty message themselves. ChatGPT then pulled from its memory and, knowing the Redditor had just started school, responded with something it thought would be appropriate. I couldn’t find a comment from the Redditor about whether they have memory enabled on their ChatGPT account, but it’s safe to say (for now) that ChatGPT has not gained consciousness and started reaching out to users at random.
