On Tuesday, Google is releasing three new AI experiments aimed at helping people learn to speak a new language in a more personalized way. While the experiments are still in their early stages, it's possible the company is looking to take on Duolingo with the help of Gemini, Google's multimodal large language model.
The first experiment helps you quickly learn specific phrases you need in the moment, while the second helps you sound less formal and more like a local.
The third experiment lets you use your camera to learn new words based on your surroundings.

Google notes that one of the most frustrating parts of learning a new language is when you find yourself in a situation where you need a specific phrase you haven't learned yet.
With the new "Tiny Lesson" experiment, you can describe a situation, such as "finding a lost passport," to receive vocabulary and grammar tips tailored to the context. You can also get suggestions for responses, like "I don't know where I lost it" or "I want to report it to the police."
The next experiment, "Slang Hang," aims to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it's experimenting with teaching people to speak more conversationally, using local slang.

With this experiment, you can generate a realistic conversation between native speakers and see how the dialogue unfolds one message at a time. For example, you could learn through a conversation where a street vendor is chatting with a customer, or a situation where two long-lost friends reconnect on the subway. You can interact with unfamiliar terms to find out what they mean and how they're used.
Google says the experiment occasionally misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.

The third experiment, "Word Cam," lets you snap a photo of your surroundings, after which Gemini will detect the objects in it and label them in the language you're learning. The feature also gives you additional words you can use to describe the objects.
Google says that sometimes you just need words for the things in front of you, because it can show you how much you don't know yet. For example, you may know the word "window," but you may not know the word "blinds."
The company notes that the idea behind these experiments is to see how AI can be used to make independent learning more dynamic and personalized.
The new experiments support the following languages: Arabic, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), English (Australia), English (UK), English (US), French (Canada), French (France), German, Greek, Hebrew, Hindi, Portuguese (Brazil), and Turkish. The tools can be accessed via Google Labs.
