Hello,

In a sudden wave of tech-military partnerships, Meta, Anthropic, AWS and OpenAI have decided to let the US and several other governments use their AI models for warfare.[1]

It's one thing if AI messes up your playlist or gives you the wrong recipe; the stakes are low. But when AI is used to make life-or-death decisions with weapons of war, the stakes couldn't be higher.

Using AI models in warfare will make our world a less stable and more volatile place to live. It risks kicking off a new arms race, where countries rush to outpace each other in building catastrophic and deadly AI-powered capabilities.[2] AI will accelerate the speed and scale of conflict, leading to unpredictable and uncontrollable escalations, as autonomous systems may make life-or-death decisions with limited human oversight.[3]

At Mozilla, we believe these AI models should never be used in warfare: not now, not ever, and with no exceptions. And now, we're asking the entire Mozilla community to join us in this call.

Add your name to say 'No AI models in warfare', and together we can build a world where technology serves humanity, not conflict.

Sign Now →

AI has the immense power to shape our world in ways we can't fully imagine yet, transforming how we connect, learn, and thrive. But it also has the potential to take us down a dark path. Big tech companies are increasingly deciding to take a journey down that darker path.

In case you missed it, here's what has been announced in the past few weeks:

- Meta opened up its Llama AI model for use by the U.S. government and contractors for military purposes, a U-turn on its original policy forbidding any use of its AI for projects related to military, warfare, or espionage missions.[4]
- Anthropic announced it was teaming up with Palantir and Amazon Web Services (AWS) to give US intelligence and defence agencies access to Anthropic's Claude family of AI models. This partnership was formed despite Anthropic having long positioned itself as a more safety-conscious generative AI vendor.[5]
- OpenAI quietly removed language from its usage policies prohibiting people from using its products for 'military and warfare', and then, just last month, struck a deal with a government contractor that helps big tech companies secure contracts with US defence departments.[6]
- AI Now published a prominent paper pointing to the present-day harms of unreliable AI weapons, including current systems like Gospel, Lavender, and Where's Daddy, which have facilitated a significant civilian death toll in Gaza through the fallible collection and use of personally identifiable information.[7]

Building trustworthy AI is at the heart of Mozilla's movement-building work. As a community, we have always stood up for an open internet and technology that is safe and accessible for everyone. Together, we need to move towards a world of AI that is helpful, rather than harmful, to human beings. A world where human agency is at the core of how AI models are built and this technology serves to enrich our lives.

That's why Mozilla firmly believes that AI models have no place in warfare. If you agree, can you join the campaign calling for 'No AI models in warfare'? AI should be used to advance humanity, not to fuel and escalate war.

Add Your Name: No AI in Warfare! →

Thank you for everything you do for the internet.

Ayah Bdeir
Senior Strategic Advisor
Mozilla

More information:

1. TechCrunch: Anthropic teams up with Palantir and AWS to sell AI to defense customers. 7 November 2024.
2. Business Insider: The US is in a technological arms race with China, Air Force secretary says, and AI could decide who wins. 30 October 2024.
3. For an in-depth discussion, see the recent panel discussion at Mozilla's MozFest House: We are Life: AI Accountability During War. 13 June 2024.
4. New York Times: Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes. 6 November 2024.
5. Axios: Anthropic, Palantir, Amazon team up on defense AI. 8 November 2024.
6. Forbes: OpenAI Is Going After Defense Contracts. 15 October 2024.
7. AI Now: Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting. 18 October 2024.