Your Conversations With AI Chatbots Are Not Private

When I was in college in 2004 (I’m old), I installed an “AI” plugin that automatically responded to incoming AIM messages (also old) when I was away. The plugin replied using my chat history; if someone asked, “How are you?” the bot would respond with the answer I had most recently given to that question. You probably know where this is going: It took about two days for the plugin to repeat something mean I’d said about a friend to that friend. I deleted it, having learned my lesson about AI privacy (and friendship).

AI has come a long way in 20 years, but privacy concerns haven’t changed: everything you say to an AI chatbot can be read and potentially repeated.

Be careful what you say to AI chatbots.

Jack Wallen, writing for ZDNet, noted that Google Gemini’s privacy statement clearly states that all information in chats with Gemini (formerly called Bard) is retained for three years and that humans regularly review that data. The privacy document also explicitly warns you not to use the service for anything personal. To quote the terms:

Don’t enter anything you wouldn’t want a reviewer to see or Google to use. For example, don’t enter information that you consider sensitive or data you don’t want used to improve Google’s products, services, and machine-learning technologies.

This is Google explicitly saying, in plain language, that people can view your conversations and that those conversations will be used to improve its AI products.

Does this mean that Gemini will repeat the personal information you enter into the chat window, like my crappy AIM chatbot did? No, and the page says reviewers are working to remove obviously personal information such as phone numbers and email addresses. But the ChatGPT leak late last year, in which a security researcher managed to gain access to training information, shows that anything a large language model has access to can, at least in theory, eventually leak.

And this is all assuming that the companies running your chatbots are at least trying to be trustworthy. Both Google and OpenAI have clear privacy policies stating that they do not sell personal information. But Thomas Germain, writing for Gizmodo, reported that AI “girlfriends” encourage users to share personal information and then actively sell it. From the article:

You’ve heard stories of data problems before, but AI girlfriends are violating your privacy in “disturbing new ways,” according to Mozilla. For example, CrushOn.AI collects detailed information, including details about sexual health, medication use, and gender-affirming care. 90% of the apps may sell or share user data for targeted advertising and other purposes, and more than half won’t let you delete the data they collect.

So not only could your chat data be leaked, but some AI companies are actively collecting and selling personal information.

Bottom line: You should never share anything personal with any large language model. That means the obvious things like Social Security numbers, phone numbers, and addresses, but it extends to anything you wouldn’t want to see leaked. These apps simply aren’t designed for sensitive information.
