Hello,

Google prides itself on the consistency and accuracy of its search engine. But its latest (currently US-only) AI-powered feature for quick answers, AI Overviews, has been serving up some bizarre – and potentially dangerous – answers:

“Cheese not sticking to pizza?” Mix about 1/8 cup of non-toxic glue into the sauce.

“Is poison good for you?” Yes, poison can be good for humans in small quantities.

“How many rocks should I eat?” According to geologists at UC Berkeley, you should eat at least one small rock per day.1

Google users are already reporting many other examples like these. And while many of them are funny, others could be life-threatening: People have reported that Google’s AI Overviews have told them to add more oil to a cooking fire, to clean with deadly chlorine gas and to follow false advice about life-threatening diseases.2

Google has spent decades and billions of dollars building its reputation as a source of consistent and accurate information. By prematurely rolling out a harmful AI feature that is clearly not ready to provide users with accurate and safe information, the company is risking not only its reputation – but potentially its users’ lives.

Tell Google to immediately turn off the AI Overviews search feature until it can guarantee the feature no longer provides incorrect and dangerous information.

Sign our Petition →

Internet sleuths have traced some of the odd answers from Google’s AI to sarcastic replies in Reddit threads and to articles by satirical outlets like The Onion.3,4 It is alarming that the tool would take these at face value and present them as top answers.

Google’s CEO has defended the new search function, noting that it provides valuable “context” but that “there are still times it’s going to get it wrong” – and also acknowledging that “hallucinations” are both an “unsolved problem” and an “inherent feature” of AI models.5

But these so-called “hallucinations” could have dire consequences – and they are a tangible example of why AI needs to be trustworthy. At Mozilla, we have been working on and advocating for trustworthy AI for years, making sure AI products make our lives easier – instead of threatening them.

Together, let’s put the pressure on Google and make sure it removes AI Overviews until the tool has been fixed.

Sign Mozilla’s petition and call on Google to immediately pause its AI Overviews tool until its trustworthiness can be guaranteed.

Sign Now →

Thank you for everything you do for the internet.

Christian Bock
Head of Supporter Engagement
Mozilla

More Information:

1. CNET: Glue in Pizza? Eat Rocks? Google’s AI Search Is Mocked for Bizarre Answers. 24 May 2024.
2. See examples of these: adding oil to extinguish a fire, advice to mix bleach and vinegar (creating potentially lethal chlorine gas) and blatantly incorrect health advice about smoking and cancer.
3. Daily Dot: Google’s new AI search pulls answers from decade-old Reddit troll post – suggesting glue on pizza. 23 May 2024.
4. See this post on X, comparing Google AI Overviews text to articles from The Onion.
5. The Verge: Google CEO Sundar Pichai on AI-powered search and the future of the web. 20 May 2024.