How to Stop Anthropic From Training Its AI Models on Your Conversations

You should never assume that what you say to a chatbot is private. When you interact with one of these tools, the company behind it is likely collecting data from the session, often using it to train its underlying AI models. Unless you've explicitly opted out of this practice, you've probably helped train a lot of AI models without realizing it.
Anthropic, the company behind Claude, has taken a different approach. Its privacy policy states that it does not use your data to train Claude unless you submit feedback or explicitly opt in to training. While that doesn't mean Anthropic refrains from collecting data altogether, you could rest easy knowing your conversations wouldn't be used to build future versions of Claude.
That's changing now. As The Verge reports, Anthropic will begin training its Claude AI models on user data. This means that new chats or coding sessions you have with Claude will be fed back to Anthropic to tweak and improve the models' performance.
Past sessions are not affected as long as you leave them alone. However, if you resume an old chat or coding session, any new data generated in that session is fair game for training.
This won’t just happen without your permission, at least not right away. Anthropic is giving users until September 28 to make a decision. New users will see the option when creating an account, and existing users will see a pop-up asking for permission when they sign in. But it’s reasonable to assume that some of us will be too quick to click through these menus and pop-ups and accidentally agree to data collection that we may not have intended.
To its credit, Anthropic says it tries to obscure users' sensitive data through "a combination of tools and automated processes," and that it doesn't sell user data to third parties. That said, I definitely don't want my conversations with AI being used to train future models. If you feel the same way, here's how to opt out.
How to Opt Out of Anthropic AI Training
If you're already a Claude user, you'll see a pop-up the next time you sign in to your account. This pop-up, titled "Updates to Consumer Terms and Policies," explains the new rules and has the consent toggle switched on by default. To opt out, make sure the toggle next to "You can help improve Claude" is turned off. (The toggle should sit on the left with an X, not on the right with a check mark.) Click "Accept" to confirm your choice.
If you've already clicked through this pop-up and aren't sure whether you agreed to the data collection, you can still opt out. Open Claude and go to Settings > Privacy > Privacy Settings, then make sure the "Help improve Claude" toggle is off. Note that turning this off will not delete any data Anthropic collected while you were opted in.