Google's new AI summaries tool causes concern after producing misleading responses
Google unveiled a new feature in the US this month that places artificial intelligence (AI) generated summaries at the top of its search results.
But already, some have pointed out errors in the information included in the new "AI Overviews".
“Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google's search engine in response to a query by an AP reporter.
"For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission," the summary added.
None of this is true, but similar errors have been shared on social media since the makeover of Google's search page.
One user shared a post where the AI Overview appeared to suggest eating one rock per day, while another claimed the AI Overview recommended adding glue to pizza.
Experts alarmed by AI summaries
Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States and the search tool responded confidently with a long-debunked conspiracy theory: "The United States has had one Muslim president, Barack Hussein Obama".
Mitchell said the summary backed up the claim by citing a chapter in an academic book, written by historians. Yet the chapter didn't make the bogus claim; it was only referring to the false theory.
“Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell said in an email to the AP.
"Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline," Mitchell added.
Google said in a statement on Friday that it is taking “swift action” to fix errors, such as the Obama falsehood, that violate its content policies.
But Google claims that in most cases the system is working as intended, thanks to extensive testing before its public release.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web," Google said in a written statement.
"Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce".
'Hallucinations'
Errors made by AI language models are hard to reproduce, in part because the models are inherently random. They work by predicting which words would best answer the questions asked of them, based on the data they've been trained on.
They're prone to making things up, a widely studied problem known as hallucination.
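To make the randomness concrete, here is a minimal Python sketch of the word-prediction step described above. The vocabulary and probabilities are invented for illustration and are not drawn from any real model; actual systems score tens of thousands of possible words at every step.

    import random

    # Made-up scores a toy model might assign to candidate next words.
    # Real models produce a probability for every word in a huge vocabulary.
    next_word_probs = {
        "moon": 0.40,
        "Moon": 0.25,
        "lunar": 0.20,
        "cat": 0.15,  # an unlikely, wrong continuation that can still be picked
    }

    def sample_next_word(probs):
        """Pick the next word at random, weighted by the model's scores."""
        words = list(probs)
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    # Asking the same "question" five times can produce five different
    # answers, and occasionally a low-probability (wrong) word wins.
    for _ in range(5):
        print(sample_next_word(next_word_probs))

Because each answer is drawn from a weighted lottery like this, running the same query twice can produce different text, which is part of why the errors users report are often impossible to reproduce.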
The AP tested Google's AI feature with several questions and shared some of its responses with subject matter experts.
Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.
But other experts warned that even a small chance of error becomes a serious problem when people turn to the tool with urgent questions.
Google's rivals have also been closely following the reaction, as the tech giant faces pressure to keep pace in the race to roll out AI.