What happened
Anthropic has announced a significant update to its privacy policy: conversations with its Claude chatbot may be used as training data for future models unless users explicitly opt out. The change, effective October 8, is intended to improve the chatbot's performance by using real-world interactions to refine its responses. New users will encounter a consent prompt during sign-up, while existing users will see a pop-up; in both cases the default setting opts them into data usage. In addition, the retention period for user data has been extended from 30 days to five years, regardless of whether users consent to model training. The shift positions Anthropic alongside other major AI companies that already use customer data for model training, underscoring a growing industry trend.
Key insights
1. User Data Utilization: Anthropic will use user conversations for model training unless users opt out.
2. Increased Data Retention: The user data retention period has been extended from 30 days to five years.
3. Default Opt-In Setting: Users are automatically opted in to data usage unless they change their settings.
Takeaways
The update reflects a broader trend in AI development toward using customer data to improve models.