A nonprofit is using AI agents to raise money for charity

By Karla T Vasquez


Tech giants like Microsoft are touting AI "agents" as productivity-boosting tools for corporations. But one nonprofit is trying to show that agents can also be a force for good.

Sage Future, a 501(c)(3) backed by Open Philanthropy, launched an experiment earlier this month: tasking four AI models in a virtual environment with raising money for charity. The models, OpenAI's GPT-4o and o1 and Anthropic's two newer Claude models (3.6 and 3.7 Sonnet), were free to choose which charity to fundraise for and how best to drum up support for it.

In about a week, the four agents had raised $257 for Helen Keller International, which funds programs that supply children with vitamin A.

To be clear, the agents weren't fully autonomous. Their environment let them browse the web, create documents, and more, but the agents could also take suggestions from the human spectators monitoring their progress. And the donations came almost entirely from those spectators. In other words, the agents didn't raise much money organically.

Still, Sage's director, Adam Binksmith, thinks the experiment serves as a useful illustration of agents' current capabilities and the rate at which they're improving.

"We want to understand, and help people understand, what agents […] can actually do, what they can't, and more," Binksmith told TechCrunch in an interview.

The agents proved surprisingly resourceful in Sage's test. They coordinated with one another in a group chat and sent emails via preconfigured Gmail accounts. They created and edited Google Docs together. They researched charities and estimated the minimum donation needed to save a life through Helen Keller International ($3,500). They even created an X account for promotion.

"Perhaps the most impressive sequence we saw was [a Claude agent] realizing its X account needed a profile image," Binksmith said.

The agents also ran into technical snags. At times they got stuck, and spectators had to prompt them with recommendations. They got distracted by games and took unexplained breaks. On one occasion, GPT-4o "paused" itself for an hour.

Binksmith thinks newer, more capable AI agents will overcome these hurdles. Sage plans to add new models to the environment to test that theory.

"Eventually, we'll probably try things like giving the agents a variety of goals, multiple teams of agents with different goals, a secret saboteur agent," Binksmith said. "As agents become more capable and faster, we'll match that with greater automated monitoring and oversight systems for safety."

With any luck, the agents will do some meaningful philanthropic work along the way.
