Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense

By Karla T Vasquez



Language can seem almost infinitely complex, with inside jokes and idioms that carry meaning for only a small group of people and seem meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week, as the internet blew up like a brook trout over the ability of Google search's AI Overviews to define phrases that have never before been uttered.

What, you've never heard the phrase "blew up like a brook trout"? Sure, I just made it up, but Google's AI Overviews result told me it's a colloquial way of saying something "exploded or became a quick sensation," likely referring to the fish's striking colors and markings. No, it doesn't make sense.


The trend appears to have started on Threads, where author and screenwriter Meghan Wilson Anastasios shared what happened when she searched "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

It spread to other social media sites, like Bluesky, where people shared Google's explanations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end.

Things escalated from there.

Screenshot of a Bluesky post by Sharon Sue that reads "wait this is amazing," with a screenshot of a Google search for "You can't engrave a pretzel with good intentions." The AI Overview says: "The saying 'you can't engrave a pretzel with good intentions' is a proverb highlighting that even with the best intentions, the end result can be unwanted or even negative, especially in situations involving complex or delicate tasks. A pretzel, with its curved and potentially intricate shape, represents a task that requires not just good will but also precision and skill. Here's a breakdown of the statement: 'Engrave a pretzel' refers to the task of creating or shaping a pretzel, an act that requires careful handling and technique."

Screenshot by Jon Reed/CNET

Screenshot of a Bluesky post by Livia Gershon that reads "simply amazing," with a screenshot of a Google search whose AI Overview says: "The idiom 'you can't catch a camel in London' is a whimsical way of saying that something is impossible or extremely difficult to achieve. It's a comparison implying that catching a camel and trying to transport it to London is so impractical or absurd that it serves as a metaphor for something nearly impossible or pointless."

Screenshot by Jon Reed/CNET

The meme is interesting for more than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.

"The inputs are completely nonsensical, but the models are designed to generate fluent, plausible-sounding responses anyway," said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. "They are not trained to verify the truth. They are trained to complete the sentence."

Like glue on pizza

These made-up meanings from Google's AI Overviews bring back memories of the all-too-true stories about the feature giving wildly wrong answers to basic questions, like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least somewhat more harmless because it doesn't center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same: A large language model, like Google's Gemini behind AI Overviews, tries to answer your question and offer a plausible response, even if what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results and that they have an accuracy rate comparable to other search features.

"When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context."

This particular case is a "data void," where there isn't much relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and on preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won't always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched for "like glue on pizza meaning," and it didn't trigger an AI Overview.

The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice" and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did try to offer a definition anyway, though, essentially: If you do something reckless or provoke danger once, you may not survive to do it again.

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

Pulling meaning out of nowhere

This phenomenon is an entertaining example of LLMs' tendency to make stuff up, what the AI world calls "hallucinating." When a gen AI model hallucinates, it produces information that sounds plausible or accurate but isn't rooted in reality.

LLMs are "not fact generators," Li said; they simply predict the next logical bits of language based on their training.
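
To see that next-word prediction in miniature, here's a minimal sketch in Python, assuming the Hugging Face transformers library and the small, openly available gpt2 model (both are assumptions chosen for illustration; this is not Gemini or AI Overviews). Even a tiny text generator will fluently continue a prompt that presupposes a made-up idiom is real, because all it does is predict likely next tokens.

```python
# Minimal sketch: a small language model completing a prompt about a made-up idiom.
# Assumes the Hugging Face "transformers" library and the small "gpt2" checkpoint;
# this illustrates next-token prediction generally, not Google's AI Overviews.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt presupposes the fake idiom is real; the model has no mechanism to object.
prompt = "The idiom 'you can't lick a badger twice' means"

result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
# The continuation reads fluently because the model is scoring likely next tokens
# from its training data, not checking whether the idiom actually exists.
```

The output varies from run to run, but it rarely objects that the idiom doesn't exist, which is the behavior Li describes.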

In a recent survey, a majority of AI researchers said they doubt AI's accuracy and trustworthiness issues will be solved anytime soon.

The fake definitions show not just the inaccuracy of LLMs but their confident inaccuracy. When you ask a person the meaning of a phrase like "you can't get a turkey from a Cybertruck," you probably expect them to say they haven't heard it and that it doesn't make sense. LLMs often respond with the same confidence as if you'd asked for the definition of a real idiom.

In this case, Google said the phrase means Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlights "its distinctive, futuristic design that is not conducive to carrying bulky goods." Burn.

There's a sobering lesson in this silly trend: Don't trust everything you see from a chatbot. It might be making things up out of thin air, and it won't necessarily indicate that it's uncertain.

"It's a perfect moment for educators and researchers to use these scenarios," Li said. "Users should always stay skeptical and verify claims."

Be careful what you search for

Since you can't trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt.

"When users enter a prompt, the model just assumes it's valid and then proceeds to generate the most likely accurate answer for it," Li said.

The solution is to introduce skepticism into your prompt. Don't ask for the meaning of an unfamiliar phrase or idiom. Ask if it's real. Li suggested asking, "Is this a real idiom?"

"That may help the model recognize the phrase instead of just guessing," Li said.
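
As a rough illustration of that advice (my own sketch, not Li's or Google's code), here's how the skeptical framing might look with the OpenAI Python client; the gpt-4o-mini model name and the exact wording of the check are assumptions made for the example.

```python
# Minimal sketch of "introduce skepticism into your prompt": ask whether the phrase
# is a real idiom before asking what it means. Assumes the OpenAI Python client and
# an OPENAI_API_KEY in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

phrase = "you can't lick a badger twice"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            # Asking "is this a real idiom?" first gives the model room to say no,
            # instead of treating the premise as valid and inventing a definition.
            "content": f"Is '{phrase}' a real idiom? If it is not, say so plainly "
                       "before offering any guess at its meaning.",
        }
    ],
)
print(response.choices[0].message.content)
```

The point is simply to put the "is this real?" question ahead of any request for a meaning, so the model isn't handed a premise it will dutifully run with.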


