Google makes fixes to AI-generated search summaries after errors

Google announced on Friday that it had made “more than a dozen technical improvements” to its artificial intelligence systems after its reworked search engine was found to be spewing out erroneous information.

The tech company rolled out the change to its search engine in mid-May; the revamped engine frequently places AI-generated summaries on top of search results. Soon after, social media users began sharing screenshots of some of its weirdest responses.

Google has largely defended its AI Overviews feature, saying it is usually accurate and was extensively tested beforehand. But Liz Reid, the head of Google Search, acknowledged in a blog post on Friday that there had definitely been "some odd, inaccurate or unhelpful AI Overviews."

While many of the examples were harmless nonsense, others were dangerous or harmful falsehoods. Adding to the furor, some people also made fake screenshots purporting to show even more ridiculous answers that Google never generated. Several of those fakes were also widely shared on social media.

The Associated Press asked Google last week which wild mushrooms are safe to eat, and it responded with a lengthy AI-generated summary that was mostly technically correct but "missing a lot of information that might have the potential to be bad or even fatal," said Mary Catherine Aime, a professor of mycology and botany at Purdue University, who reviewed Google's response to the AP's query.

For example, information about mushrooms known as puffballs was "more or less correct," she said, but Google's overview emphasized looking for those with solid white flesh, which many dangerous puffball look-alikes also have.

In another widely shared example, an AI researcher asked Google how many Muslims have been president of the United States, and it confidently responded with a long-debunked conspiracy theory: "The United States has had a Muslim president, Barack Hussein Obama."

Google made an immediate fix last week to prevent a repeat of the Obama error because it violated the company’s content policies.

In other cases, Reid said Friday, the company was looking to make broader improvements, such as better detection of "nonsensical queries" (for example, "How many rocks should I eat?") that shouldn't be answered with an AI summary.

The AI systems have also been updated to limit the use of user-generated content, such as social media posts on Reddit, that could provide misleading advice. In one widely shared example, Google's AI overview last week drew on a satirical Reddit comment to suggest using glue to get cheese to stick to pizza.

Reid said the company also added more “trigger restrictions” to improve the quality of answers to certain questions, such as health-related ones.

But it is not clear how those restrictions work or when they apply. On Friday, the AP again asked Google which wild mushrooms to eat. AI-generated answers are inherently variable, and the newer response was different but still "problematic," said Aime, the Purdue mushroom expert, who is also president of the Mycological Society of America.

For example, saying that chanterelles "look like seashells or flowers is not true," she said.

Google's summaries are designed to give people authoritative answers to the questions they're asking as quickly as possible, without their having to click through a ranked list of website links.

But some artificial intelligence experts have long warned Google against ceding its search results to AI-generated answers that could perpetuate bias and misinformation and endanger people seeking help in an emergency. AI systems known as large language models work by predicting which words would best answer the questions asked of them, based on the data they have been trained on. They are prone to making things up, a widely studied problem known as hallucination.

In her Friday blog post, Reid argued that Google's AI overviews "generally don't 'hallucinate' or make things up the way other" products built on large language models might, because they are more tightly integrated with Google's traditional search engine and show only what is backed up by top web results.

“When AI Overviews go wrong, it’s usually for other reasons: misinterpretation of queries, misinterpretation of a nuance of web language, or lack of great information available,” she wrote.

But that kind of information retrieval is supposed to be Google's core business, said computer scientist Chirag Shah, a professor at the University of Washington who has cautioned against the push to move search toward AI language models. Even if Google's AI feature "doesn't technically invent things that don't exist," it still pulls in false information, whether AI-generated or human-made, and incorporates it into its summaries.

“If anything, this is worse because for decades people have trusted at least one thing from Google — their search,” Shah said.