Eight Things You Should Never Share With an AI Chatbot

It probably goes without saying, but your conversations with AI-powered chatbots are not private. Anything you type or upload to Gemini, ChatGPT, and other models can be read and used in various ways. If you wouldn't hand a document or repeat a piece of information to a stranger, you shouldn't include it in a chatbot prompt.
Stanford researchers examined the privacy policies of the six US companies behind the most popular AI-powered chatbots, including Claude, Gemini, and ChatGPT, and found that all of them use chat data for training by default. Some store this data indefinitely, and most combine it with other information collected about consumers, such as search queries and purchases. In most cases you can opt out of having your data used for LLM training, but chats can also be read by human reviewers, and long retention periods increase the risk of your information being exposed in the event of a hack.
If you’re going to use an AI chatbot, here’s what you should avoid:
- Login Credentials: Obviously, you should never paste usernames and passwords into a chatbot, and that includes uploading documents that contain login credentials. Artificial intelligence is also poor at generating strong passwords: use your password manager's tools instead, or, better yet, use a passkey where one is available.
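If you need a strong password and don't want to type anything sensitive into a chatbot, you can generate one locally. Here is a minimal sketch using Python's standard `secrets` module (the function name and 20-character default are illustrative choices, not from any particular password manager):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password locally; nothing is sent to any service."""
    # secrets uses a cryptographically secure random source,
    # unlike the general-purpose random module.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Because everything runs on your own machine, the password never appears in a chat log or training dataset.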
- Financial Data: AI chatbots are not financial experts, and you should not upload documents or include data related to your specific finances in your queries. This includes bank statements, credit card numbers, investment information, account numbers and balances, and so on. Sharing financial data in unsecured locations increases the risk of theft, fraud, and scams.
- Medical Records: AI chatbots are not medical professionals either, and should not be relied on for medical advice. You probably don't want your medical records used for model training, and uploading them also exposes them to potential data breaches.
- Personal Data: AI prompts should never contain information such as your name, address, email, phone number, date of birth, Social Security number, passport number, or any other data that could be used to steal your identity. (Financial information and medical records are also considered sensitive personal data.)
- General Health Information: Beyond keeping your sensitive medical data private, you should avoid giving chatbots seemingly innocuous health information that could be used to profile you. For example, a Stanford University report notes that AI-powered chatbots can infer your health status from requests for heart-healthy recipes, and that such inferences could eventually be made available to insurance companies. This also covers topics such as sexual health, medication use, and gender-affirming therapy.
- Mental Health Issues: Your chatbot is not a therapist. Artificial intelligence is at best useless and at worst harmful when it comes to mental health. Even with updates designed to protect users in crisis situations, chatbots are no substitute for real human support.
- Photos: AI-powered image editing is popular, but that doesn't mean it's safe. You may not want your personal photos used for model training, and image metadata can contain information like your GPS location. At a minimum, avoid uploading images of people (especially minors), and consider removing EXIF data before uploading anything.
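Stripping EXIF data before sharing a photo can be done locally. A sketch using the third-party Pillow library (assumed to be installed; the function name is illustrative): re-saving only the pixel data drops the metadata, including any GPS tags.

```python
from PIL import Image  # third-party Pillow library, assumed installed

def strip_exif(src: str, dst: str) -> None:
    """Re-save an image without its EXIF metadata (GPS location, camera model, etc.)."""
    with Image.open(src) as img:
        # Copy only the pixel data into a fresh image; metadata is left behind.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

# Example usage (file names are placeholders):
# strip_exif("photo.jpg", "photo_clean.jpg")
```

Note that re-saving a JPEG this way also re-encodes it, so there may be a slight quality loss; dedicated tools such as exiftool can remove metadata without re-encoding.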
- Company Documents: Artificial intelligence can be useful for summarizing documents, creating presentations, drafting emails, and other work tasks, but exercise caution before uploading files containing confidential company information to a chatbot. Your employer may even have a policy prohibiting this.
Ultimately, be careful about what you share with AI-powered chatbots—assume all your messages are stored and may be read by others. Avoid anything personal or personally identifiable, and enable all available privacy settings (such as opting out of data sharing and training).