Google is taking reservations to talk to its supposedly-sentient chatbot

At last May’s I/O 2022 conference, Google CEO Sundar Pichai announced that the company would gradually roll out its experimental LaMDA 2 conversational AI model to select beta users over the coming months. Those months have arrived. On Thursday, researchers at Google’s AI division announced that interested users can register to explore the model as access gradually becomes available.

Regular readers will recognize LaMDA as the supposedly sentient NLP (Natural Language Processing) model that got a Google researcher fired. A class of AI models designed to parse human speech into actionable commands, NLPs power digital assistants and chatbots like Siri and Alexa, and do the heavy lifting for real-time translation and subtitling apps. Basically, when you talk to a computer, it uses NLP technology to listen.

“Sorry, I didn’t quite understand that” is a phrase that still haunts the dreams of many early Siri users, even though NLP technology has evolved rapidly over the past decade. Today’s models are trained on hundreds of billions of parameters, can translate hundreds of languages in real time, and can even transfer lessons learned in one conversation to subsequent chats.

Google’s AI Test Kitchen will allow beta users to experiment and explore interactions with the NLP in a controlled, presumably monitored, sandbox. Access is rolling out today to small groups of US Android users before expanding to iOS devices in the coming weeks. The program offers a series of guided demos that show users the capabilities of LaMDA.

“The first demo, ‘Imagine It,’ lets you name a place and offers ways to let your imagination run wild,” wrote Tris Warkentin, group product manager at Google Research, and Josh Woodward, senior director of product management for Labs at Google, in a Google AI blog post Thursday. “With the ‘List It’ demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful sub-tasks. And in the ‘Talk About It (Dogs Edition)’ demo, you can have a fun, frank chat about dogs and only dogs, which examines LaMDA’s ability to stay on topic even when you try to veer off of it.”

The focus on safe, responsible interactions is common in an industry where there’s already a name for chatbot AIs that go full Nazi, and that name is Tay. Luckily, that deeply embarrassing incident was a lesson Microsoft and much of the rest of the AI field have taken to heart, which is why we’re seeing such strict limits on what users can ask of Midjourney or DALL-E 2, and on what topics Facebook’s BlenderBot 3 will discuss.

That’s not to say the system is foolproof. “We ran dedicated rounds of adversarial testing to find additional flaws in the model,” Warkentin and Woodward wrote. “We recruited expert red team members… who uncovered additional harmful but subtle outputs.” These include a failure to “produce a response when it’s used, due to difficulty distinguishing between benign and adversarial prompts,” and producing “harmful or toxic responses based on biases in its training data.” Just like so many AIs these days.

