Is my data used for model training?


This article is about our consumer products (e.g. Claude Free, Claude Pro). For our commercial products (e.g. Claude for Work, Anthropic API), see here.

We will not use your Inputs or Outputs to train our generative models (i.e. Claude), unless you’ve explicitly reported the materials to us (for example via our feedback mechanisms as noted below) or you’ve explicitly opted in to training (for example by joining our trusted tester program).

If your conversations are flagged for violating our Usage Policy, we may use or analyze them to improve our ability to detect and enforce against harmful activity on our platform, including training internal-only models used in our safety systems.

Feedback

When you send us feedback, we will store the entire related conversation, including any content, custom styles, or conversation preferences, in our secure back-end for up to 10 years as part of your feedback. Feedback data does not include raw content from integrations (e.g. Google Drive), though such data may be included if it was directly copied into your conversation with Claude.

We de-link your feedback from your user ID (e.g. your email address) before it is used by Anthropic. We may use your feedback to analyze the effectiveness of our Services, conduct research, study user behavior, and train our AI models, as permitted under applicable laws. We do not combine your feedback with your other conversations with Claude.

Here’s an example of what you’ll see when using the thumbs up/thumbs down icons to start a feedback report: