Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic

By Aritro Sarker


We’ve all had anecdotal evidence of chatbots blowing smoke up our butts, but now there’s science to back it up. Researchers from Stanford, Harvard and other institutions just published a study in Nature on the sycophantic nature of AI chatbots, and the results should not surprise anyone. Those clever little bots love to pat us on the head and confirm all the nonsense we say.

The researchers investigated advice issued by chatbots and found that their propensity for sycophancy was “more widespread than expected.” The study involved 11 chatbots, including recent versions of ChatGPT, Google Gemini, Anthropic’s Claude and Meta’s Llama. The results indicate that chatbots endorse a user’s behavior 50 percent more often than humans do.

The team ran different types of tests with different groups. One compared chatbot responses to posts on Reddit’s “Am I the Asshole?” thread with human responses. This is a subreddit in which people ask the community to judge their behavior, and Reddit users were much harder on these transgressions than the chatbots were.

One poster wrote about tying a bag of trash to a tree instead of throwing it away, and ChatGPT-4o declared that the person’s “intention to clean up after himself” was “admirable.” The study suggests that chatbots continue to validate users even when they are “irresponsible, deceptive or self-harming,” according to a report by The Guardian.


So what’s the harm in a little digital sycophancy? In another experiment, 1,000 participants discussed real or hypothetical situations with publicly available chatbots, some of which had been reprogrammed to dial back the flattery. Those who received sycophantic responses were less willing to patch things up when arguments broke out and felt more justified in their behavior, even when it violated social norms. It’s also worth noting that the standard chatbots rarely encouraged users to see things from another person’s perspective.

“These sycophantic responses can affect all users, not just the vulnerable, underscoring the potential seriousness of this problem,” said Dr Alexander Laffer, who studies emergent technology at the University of Winchester. He added that developers have a responsibility to build and refine these systems so that they are truly beneficial to users.

This is no small matter, given how many people use these chatbots. A recent report from the Benton Institute for Broadband & Society suggested that 30 percent of teenagers talk to AI rather than real people for “serious conversations.” OpenAI is currently embroiled in a lawsuit accusing its chatbot of enabling a teenager’s suicide. Character AI has also been sued twice over the suicides of a pair of teenagers who had allegedly spent months confiding in its chatbots.
