
Anthropic is launching a new program to study AI ‘model welfare’


Could future AIs be "conscious" and experience the world the way people do? There's no strong evidence that they will, but Anthropic isn't ruling out the possibility.

On Thursday, the AI lab announced that it has launched a research program to investigate, and prepare to navigate, what it calls "model welfare." As part of the effort, Anthropic says it will explore questions such as how to determine whether an AI model's welfare deserves moral consideration, the potential significance of signs of model "distress," and possible "low-cost" interventions.

There's major disagreement within the AI community over which human characteristics models "display," if any, and over how we should "treat" them.

Many academics believe that AI today can't approximate consciousness or the human experience, and won't necessarily be able to in the future. AI as we know it is a statistical prediction engine. It doesn't really "think" or "feel" as those concepts have traditionally been understood. Trained on countless examples of text, images, and more, AI learns patterns and, sometimes, useful ways of extrapolating to solve tasks.

Mike Cook, a research fellow at King's College London specializing in AI, recently told TechCrunch in an interview that a model can't "oppose" its "values" because models don't have values. To suggest otherwise is to project human traits onto the system.

"Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI," Cook said. "Is an AI system optimizing for its goals, or is it 'acquiring its own values'? It's a matter of how you describe it, and how flowery the language you want to use about it is."

Another researcher, Stephen Casper, a doctoral student at MIT, told TechCrunch that he thinks of AI as an "imitator" that "[does] all sorts of confabulation[s]" and says "all sorts of frivolous things."

Yet other scientists insist that AI does have values and other components of moral decision-making. A study out of the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize certain outcomes over people in certain scenarios.

Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated "AI welfare" researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who is leading the new model welfare research program, told The New York Times that he thinks there's a 15% chance Claude or another AI is conscious today.)

In a blog post on Thursday, Anthropic acknowledged that there's no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant moral consideration.

"In light of this, we're approaching the topic with humility and with as few assumptions as possible," the company said. "We recognize that we'll need to regularly revise our ideas as the field develops."
