Hello,

A few weeks ago, we sounded the alarm about Meta's AI "Discover Feed" — a public stream where deeply personal AI-generated conversations were ending up online.[1] Medical concerns, relationship confessions, work dilemmas, even people's exact locations were being broadcast to the world. And users often had no idea their private chats had gone public.

The good news: Our campaign worked. More than 10,000 people signed our petition. Major outlets including Business Insider, the BBC, The Washington Post, WIRED, and TechCrunch ran investigations.[2–6] Meta reached out to us directly, and their product team implemented real changes — including a mandatory warning screen that served as a crucial friction step to prevent accidental sharing.

The disappointing news: Just days after implementing these privacy protections, Meta removed this educational friction step — a moment of clarity that helped users understand what they were sharing, where it would appear, and who could see it. This wasn't just friction for friction's sake — it was the right kind of friction: a simple, educational prompt that empowered people to make informed choices online.

Meta proved they can build these safeguards — they just chose to remove them. And that shows the power we have when we speak up. Even if the changes didn't last, they happened because we demanded better. Now we need to keep the pressure on to make sure Meta makes these protections permanent.

If you believe technology should serve the people who use it, will you sign our petition demanding that Meta restore these full privacy protections? Together, we can help prevent users from accidentally sharing personal AI conversations on Meta's public Discover Feed.

Add your name →

This was the prompt Meta briefly showed before users shared to their feed. It was a simple, effective way to help users avoid accidental oversharing — before Meta removed it.

Without that positive friction step, delicate conversations, calls for help, and accidental recordings are all at risk of unintentionally ending up in Meta's public feed. Again. Weeks of media coverage have shown example after example of embarrassing, unintentionally shared posts — Meta knows they need to act.

This is what defiant optimism looks like. We refuse to accept that misleading invasions of our privacy are just "the cost of innovation." When tech companies backtrack on reasonable protections that they are demonstrably able to deliver, we fight twice as hard. Our work isn't done until those protections are permanent and complete.

Add your name to demand that Meta permanently protect sensitive AI conversations from being unintentionally shared with the world.

Sign now →

Thank you for proving that when we organize, we can make tech companies listen. When we come together to demand better, we move closer to the future of technology we know is possible.

Let's keep making good.

Neneh Darwin
Senior Campaigner
Mozilla Foundation

More Information:

1. Mozilla Foundation: "Meta: Help Users Stop Accidentally Sharing Private AI Conversations." May 28, 2025.
2. Business Insider: "Mark Zuckerberg has created the saddest place on the internet with Meta AI's public feed." June 11, 2025.
3. WIRED: "The Meta AI App Lets You 'Discover' People's Bizarrely Personal Chats." June 12, 2025.
4. BBC: "Meta AI searches made public - but do all its users realise?" June 13, 2025.
5. Washington Post: "Meta AI users confide on sex, God and Trump. Some don't know it's public." June 13, 2025.
6. TechCrunch: "The Meta AI app is a privacy disaster." June 12, 2025.