Anthropic updates Claude’s policy with new data training choices

The change does not apply to enterprise services, which Anthropic confirmed remain governed by separate contractual agreements and are excluded from the new training policy.

Anthropic has introduced new options for Claude users, allowing them to decide whether conversations are retained for AI training or deleted within thirty days.

The US AI startup's updated data policy gives users the option to allow their conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will remain under the existing policy, in which conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that new users will make the choice during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.
