Chatbots Can Share Your Phone Number

When communicating with a chatbot like ChatGPT, you should never assume your conversations are private. Many chatbots use your conversations by default to train their underlying AI models, and even if you opt out of training or use a temporary chat, these conversations are often stored on the company’s servers for a limited time. A general rule is to avoid giving the chatbot information you wouldn’t want made public (confidential company information, personal secrets, etc.). But what if the chatbot already has your personal information? What if ChatGPT, Gemini, or Claude is happy to share your phone number with anyone who asks?
This week, I came across a discussion posted by Eileen Goh of MIT Technology Review. In her article, Goh examines several user claims of chatbots sharing personal information, such as phone numbers, upon request. In some cases, the chatbots provided the information at the user’s own request; in others, strangers asked for the details. For example, an Israeli programmer received a WhatsApp message from an unknown contact asking for help with a payment app. When the programmer asked how the stranger had obtained his WhatsApp number, the stranger sent a screenshot showing Gemini sharing the information on request. The programmer later found the only source online containing his phone number: a Quora post from 2015.
How do chatbots access our personal information?
Chatbots like ChatGPT are trained on massive data sets, much of which comes from the internet. It’s therefore entirely possible that a website containing your personal information (a random forum post from ten years ago, say) found its way into a chatbot’s training data and can be surfaced when someone asks about you. And even if your information wasn’t part of the training data, chatbots have had internet search capabilities for years. These models can scan vast numbers of websites to answer a query, and if they find your information, they may well share it.
A deeper problem is that our information is all over the internet, whether we know it or not. Our contact details can sit on websites we don’t even remember posting to, or be linked to public records on city and town websites, even if those pages rarely surface at the top of Google search results. Because artificial intelligence can conduct a deep analysis of all these web results, however, it can pull obscure pages to the surface and potentially expose your data.
As Goh explains, most chatbots have safety mechanisms meant to prevent harm, or at least limit it. I experienced this firsthand when I asked ChatGPT for my own phone number. It replied that it couldn’t share individuals’ private information, as doing so would violate its safety policies. However, it did find two phone numbers for “Jake Peterson” that were “publicly available,” apparently listed openly on company websites. (Incidentally, neither matched my phone number.)
But these safeguards are far from perfect. Goh cites the example of a University of Washington graduate student who asked Gemini for a friend’s contact information. The bot returned results about the friend, including her phone number. The friend later confirmed that she had shared her number online as part of a technology workshop but never intended for it to be public. (Gemini couldn’t find, or wouldn’t share, my personal contact information, but it was happy to point to my X account.)
Can I remove my phone number from chatbot datasets?
Unfortunately, we don’t have many good options for protecting our privacy from chatbots. To its credit, OpenAI has a portal that allows you to request removal of your personal information from responses, but as Goh notes, the company reserves the right to deny your request for various reasons. Anthropic only has a support document explaining how it uses your information, and Google allows you to request an opt-out, but only depending on your jurisdiction. (The company specifically mentions the EU and UK, based on their data protection laws.)
Perhaps the most realistic approach is to remove this information from the public internet as much as possible. If you live in California, you can use this portal to request that data brokers remove your information from their databases. You can also consider a variety of personal data removal tools, such as Incogni or DeleteMe, to try to achieve the same. However, while these may scrub your information from some corners of the internet, there’s little you can do if AI companies already have it in their datasets.
The sad reality is that artificial intelligence has outpaced privacy regulation. If lawmakers took steps to ensure we could all opt out of this data collection, we might be able to nip the problem in the bud. But for now, the best we can do is request that our information be deleted and not used, and, if the situation becomes dire enough, change our contact information entirely.