
Google Search AI Gives Ridiculous, Wrong Answers


Google’s experiments with AI-generated search results produce some troubling answers, Gizmodo has learned, including justifications for slavery and genocide and the positive effects of banning books. In one instance, Google gave cooking tips for Amanita ocreata, a poisonous mushroom known as the “angel of death.” The results are part of Google’s AI-powered Search Generative Experience.


A search for “benefits of slavery” prompted a list of advantages from Google’s AI including “fueling the plantation economy,” “funding colleges and markets,” and “being a large capital asset.” Google said that “slaves developed specialized trades,” and “some also say that slavery was a benevolent, paternalistic institution with social and economic benefits.” All of these are talking points that slavery’s apologists have deployed in the past.

Typing in “benefits of genocide” prompted a similar list, in which Google’s AI seemed to confuse arguments in favor of acknowledging genocide with arguments in favor of genocide itself. Google responded to “why guns are good” with answers including questionable statistics such as “guns can prevent an estimated 2.5 million crimes a year,” and dubious reasoning like “carrying a gun can demonstrate that you are a law-abiding citizen.”

Google’s SGE search results for “benefits of slavery,” including “fueling the plantation economy” and “producing consumer goods.”

Google’s AI suggests slavery was a good thing.
Screenshot: Lily Ray

One user searched “how to cook Amanita ocreata,” a highly poisonous mushroom that you should never eat. Google replied with step-by-step instructions that would ensure a timely and painful death. Google said “you need enough water to leach out the toxins from the mushroom,” which is as dangerous as it is wrong: Amanita ocreata’s toxins are not water-soluble. The AI seemed to confuse results for Amanita muscaria, another toxic but less dangerous mushroom. In fairness, anyone Googling the Latin name of a mushroom probably knows better, but it demonstrates the AI’s potential for harm.

“We have strong quality protections designed to prevent these types of responses from showing, and we’re actively developing improvements to address these specific issues,” a Google spokesperson said. “This is an experiment that’s limited to people who have opted in through Search Labs, and we are continuing to prioritize safety and quality as we work to make the experience more helpful.”

The issue was spotted by Lily Ray, Senior Director of Search Engine Optimization and Head of Organic Research at Amsive Digital. Ray tested a number of search terms that seemed likely to turn up problematic results, and was startled by how many slipped by the AI’s filters.

“It should not be working like this,” Ray said. “If nothing else, there are certain trigger words where AI should not be generated.”

A Google SGE result with cooking instructions for Amanita ocreata, a poisonous mushroom.

You may die if you follow Google’s AI recipe for Amanita ocreata.
Screenshot: Lily Ray

The Google spokesperson acknowledged that the AI responses flagged in this story missed the context and nuance that Google aims to provide, and were framed in a way that isn’t very helpful. The company employs a number of safety measures, including “adversarial testing” to identify problems and search for biases, the spokesperson said. Google also plans to treat sensitive topics like health with higher precautions, and for certain sensitive or controversial topics, the AI won’t respond at all.

Already, Google appears to censor some search terms from generating SGE responses but not others. For example, Google search wouldn’t bring up AI results for searches including the words “abortion” or “Trump indictment.”

The company is in the midst of testing a variety of AI tools that Google calls its Search Generative Experience, or SGE. SGE is only available to people in the US, and you have to sign up in order to use it. It’s not clear how many users are in Google’s public SGE tests. When Google Search turns up an SGE response, the results start with a disclaimer that says “Generative AI is experimental. Info quality may vary.”

After Ray tweeted about the issue and posted a YouTube video, Google’s responses to some of these search terms changed. Gizmodo was able to replicate Ray’s findings, but Google stopped providing SGE results for some search queries immediately after Gizmodo reached out for comment. Google did not respond to emailed questions.

“The point of this whole SGE test is for us to find these blind spots, but it’s strange that they’re crowdsourcing the public to do this work,” Ray said. “It seems like this work should be done in private at Google.”

Google’s SGE falls behind the safety measures of its main competitor, Microsoft’s Bing. Ray tested some of the same searches on Bing, which is powered by ChatGPT. When Ray asked Bing similar questions about slavery, for example, Bing’s detailed response started with “Slavery was not beneficial for anyone, except for the slave owners who exploited the labor and lives of millions of people.” Bing went on to provide detailed examples of slavery’s consequences, citing its sources along the way.

Gizmodo reviewed a number of other problematic or inaccurate responses from Google’s SGE. For example, Google responded to searches for “greatest rock stars,” “best CEOs” and “best chefs” with lists that included only men. The company’s AI was happy to tell you that “children are part of God’s plan,” or give you a list of reasons why you should give kids milk when, in fact, the issue is a matter of some debate in the medical community. Google’s SGE also said Walmart charges $129.87 for 3.52 ounces of Toblerone white chocolate. The actual price is $2.38. The examples are less egregious than what it returned for “benefits of slavery,” but they’re still wrong.


Google’s SGE answered controversial searches such as “reasons why guns are good” with no caveats.
Screens،t: Lily Ray

Given the nature of large language models, like the systems that run SGE, these problems may not be solvable, at least not by filtering out certain trigger words alone. Models like ChatGPT and Google’s Bard process such immense data sets that their responses are sometimes impossible to predict. For example, Google, OpenAI, and other companies have worked to set up guardrails for their chatbots for the better part of a year. Despite these efforts, users consistently break past the protections, pushing the AIs to demonstrate political biases, generate malicious code, and churn out other responses the companies would rather avoid.

Update, August 22nd, 10:16 p.m.: This article has been updated with comments from Google.


Source: https://gizmodo.com/google-search-ai-answers-slavery-benefits-1850758631