Google released an explanation a week after screenshots of its AI search feature, AI Overviews, giving false results started making the rounds on social media. The company cited a “data void” or “information gap” as the reason behind the errors.
Google launched its experimental AI search feature in the US a few weeks ago, but it quickly came under fire as users posted the strange results the tool produced on social media, including instructions to eat pebbles and combine pizza cheese with glue.
In a blog post, Google denied some of the claimed harmful answers on subjects including leaving dogs in cars and smoking while pregnant, stating that those AI Overviews “never appeared”, while acknowledging that “some odd, inaccurate or unhelpful AI Overviews certainly did show up”. Google also dismissed as “obvious” and “silly” many of the bogus screenshots circulated online.
The tech behemoth reported seeing “nonsensical new searches, seemingly aimed at producing erroneous results” and said it needs to get better at interpreting humorous content and nonsensical questions.
Google gave the query “How many rocks should I eat?” from one of the popular screenshots as an example. Prior to those screenshots going viral, essentially no one had asked that question, according to Google. Because there is a dearth of high-quality online material that thoughtfully addresses the query, Google said it represents a “data void” or “information gap”. Explaining the peculiar result the search engine produced for this specific inquiry, Google noted there was “satirical content on this topic… that also happened to be republished on a geological software provider’s website”. Thus, when that query was entered into Search, an AI Overview surfaced that dutifully linked to one of the few websites addressing it.
Liz Reid, VP and Head of Google Search, also described in the blog post how AI Overviews function and how they differ from chatbots and other LLM products. She said that AI Overviews are “powered by a customized language model, which is integrated with our core web ranking systems, and are designed to carry out traditional ‘search’ tasks, like identifying relevant, high-quality results from Google’s index.” Because of this, AI Overviews offer more than just text output; they also include pertinent links that support the findings and let users delve deeper.
“This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might,” she noted.
When AI Overviews make a mistake, Google claims that it’s because of things like “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”
The company claims to have implemented more than a dozen technical enhancements after spotting patterns in where it made mistakes, including:
- Google has improved its systems for identifying nonsensical searches and limited the inclusion of satirical and humorous content.
- Google has made changes to its processes to restrict the usage of user-generated content in answers that can provide inaccurate guidance.
- In cases where AI Overviews were not as beneficial, Google has implemented triggering constraints.
- AI Overviews will not be displayed for hard news topics where “freshness and factuality” are critical.
In addition to these enhancements, Google reported that it had detected and addressed content policy violations on “less than one in every 7 million unique queries” that resulted in the appearance of AI Overviews.