Nate:
Read the terms of service. Don’t make assumptions. Don’t accept the defaults.
Yesterday, Anthropic quietly flipped a switch. If you’re a Claude user, your conversations are now training data unless you actively say no. Not when you give feedback. Not when you explicitly consent. By default, from day one.
Here’s what changed: previously, Anthropic didn’t train on consumer chat data unless you gave an explicit thumbs up or thumbs down. Clean, simple, respectful. Now? Everything you type becomes model training fodder unless you navigate to settings and opt out. And if you don’t opt out, they keep your data for up to five years.
I’m not here to pile on Anthropic. The reaction across Reddit and tech forums has been predictably negative. Privacy advocates are disappointed, users feel blindsided, and everyone is comparing this to similar moves by OpenAI and others. What I want to talk about is something more fundamental: this is exactly why you can’t get comfortable with defaults in AI.
Think about it. You pay for Claude Max, you develop workflows, you integrate it into your thinking process. You’re human, you like stability, and you naturally assume the deal you signed up for is the deal you’ll keep. But the ground shifts. Yesterday’s privacy-first approach becomes today’s opt-out system. Tomorrow? Who knows what changes next.