Hello,

Governments around the world are coming together this week to push to make AI safer. In the U.S., President Biden issued an Executive Order with sweeping new rules on AI, covering issues like security, privacy, civil rights, workers’ rights and competition. The UK government is holding an international summit on AI safety with governments and leading tech companies. And the G7 just adopted a voluntary code of conduct for leading AI companies.

While this is a promising start, it’s crucial that we make sure this momentum leads to more binding rules, not just lofty declarations of intent.

That’s why we’re asking: Will you add your name to join Mozilla’s fight for trustworthy AI? Your early support will send a powerful message to lawmakers, tech companies, and regulatory bodies, and demonstrate the strength of our movement.

For years now, Mozilla has been leading the charge in defining what trustworthy AI should look like, and tracking issues as they emerge alongside new AI technology. Our researchers and fellows have already uncovered significant bias and discrimination in existing AI ecosystems, as well as troubling data privacy and training practices. We’re also developing more trustworthy approaches to AI that are shaped by the communities that will use it, and educating technologists about how to embrace ethical approaches.

We know that AI can be tremendously beneficial to people. But those benefits could quickly be outweighed by devastating consequences — rampant misinformation, unprecedented violations of privacy, systemic discrimination, and more — unless we act now.

That’s why we’re pushing for things like:

- Strong transparency and oversight mechanisms in AI. We’ve learned that we can’t trust big tech to live up to their own voluntary “commitments” on AI. Leading AI companies are increasingly using our personal data to train their AI models, and have stopped sharing information about what their systems can — and can’t — do accurately. Accountability is critical.
- Fair AI marketplaces. We need to ensure that AI advancement — and its benefits — is not concentrated in the hands of a small number of companies. Some government proposals address this: the Biden administration’s Executive Order, for example, encouraged the Federal Trade Commission to take action here, but we need to keep up the pressure to ensure it happens.
- Open Source in AI. Open Source can help ensure AI is safe and benefits a broad cross section of people and communities by increasing public access as well as security-enhancing scrutiny.
Right now, a tiny set of big companies concentrated in Silicon Valley are trying to lock down the generative AI space before it really even gets out of the gate by dominating the debate and undertaking sophisticated lobbying campaigns. These are the same people who take the “move fast and break things” approach to developing technology. And if you’ve followed the history of the web, you already know that’s a bad idea. Colossally bad.

But together, we can fight back, and ensure a better future with fair and responsible AI policies, and technology that benefits people, not just a select few.

You can join Mozilla’s movement for trustworthy AI by adding your name right here, right now.

Thanks for all you do for the internet and AI.

Nicholas Piachaud
Director, Campaigns
Mozilla