
Google I/O 2021: LaMDA and the power of conversations

A sample conversation with LaMDA, short for Language Model for Dialogue Applications, shown during the virtual Google I/O Developers Conference. (Bloomberg)

One of the most interesting things Google showcased at the event last night was LaMDA, or Language Model for Dialogue Applications. Here's what it can do…

Google says it has a “soft spot” for language, mentioning how it began by translating the web early on and later invented machine learning techniques to better understand search queries. It’s been a journey, but there’s “always room for improvement”, says the company. “Versatility makes language one of humanity’s greatest tools — and one of computer science’s most difficult puzzles,” Google explained as it introduced LaMDA.

LaMDA is all about conversations. “While conversations tend to revolve around specific topics, their open-ended nature means they can start in one place and end up somewhere completely different,” Google points out. This meandering quality can and does stump modern conversational agents like chatbots, which only know how to follow predefined paths.

This is where Google’s LaMDA comes in. LaMDA is short for “Language Model for Dialogue Applications” and Google calls it its “breakthrough conversation technology”. LaMDA can engage in free-flowing conversations across a number of topics, which Google hopes will help “unlock more natural ways of interacting with technology and entirely new categories of helpful applications”. Like BERT and GPT-3, these conversation skills are built on Transformer, a neural network architecture invented by Google Research and open-sourced in 2017. This architecture can produce models that can be trained to read, pay attention to how words relate to each other, and then predict what words are likely to come next.
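To make the “pay attention to how words relate to each other” idea concrete, here is a minimal sketch of scaled dot-product self-attention, the core Transformer operation. This is an illustrative toy (plain NumPy, random vectors standing in for learned word embeddings), not Google's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.

    Each position's output is a weighted mix of all positions' vectors,
    where the weights capture how strongly each word relates to the others.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)      # pairwise relatedness of words
    weights = softmax(scores, axis=-1) # each row sums to 1
    return weights @ X                 # context-aware vector for each word

# Toy sequence: three words, each a 4-dimensional vector
X = np.random.default_rng(0).normal(size=(3, 4))
ctx = self_attention(X)
print(ctx.shape)  # (3, 4): one context vector per word
```

In a real Transformer, these context vectors feed a final layer that scores the vocabulary to predict the next word; stacked attention layers with learned projections do the heavy lifting.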

Unlike other language models, Google says LaMDA was trained on dialogue, and during its training “it picked up on several of the nuances that distinguish open-ended conversation from other forms of language”. One of those nuances is “sensibleness”, better explained as: “Does the response to a given conversational context make sense?”

“But sensibleness isn’t the only thing that makes a good response. After all, the phrase “that’s nice” is a sensible response to nearly any statement, much in the way “I don’t know” is a sensible response to most questions,” points out Google. Satisfying responses need to be specific and they need to relate clearly to the context of the conversation.
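The distinction above can be illustrated with a toy heuristic. This is not LaMDA's actual metric (Google's sensibleness and specificity are human-rated); the function name and scoring constants below are invented purely to show why a generic-but-sensible reply should rank below a reply tied to the conversation:

```python
# Replies that are sensible almost anywhere, and therefore not specific
GENERIC_REPLIES = {"that's nice", "i don't know", "ok", "sounds good"}

def toy_quality(context: str, response: str) -> float:
    """Toy score: generic replies get a low fixed score; otherwise
    reward overlap with the words of the conversational context."""
    resp = response.lower().strip()
    if resp in GENERIC_REPLIES:
        return 0.1  # sensible, but says nothing about the topic
    context_words = set(context.lower().split())
    overlap = sum(w in context_words for w in resp.split())
    return min(1.0, 0.3 + 0.2 * overlap)

generic = toy_quality("I just climbed Mount Everest", "That's nice")
specific = toy_quality("I just climbed Mount Everest", "I climbed it last spring")
print(specific > generic)  # the specific reply outranks the generic one
```

Even this crude word-overlap check captures the intuition: a satisfying response must engage with what was actually said, not merely be plausible.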

LaMDA builds on earlier Google research showing that Transformer-based language models trained on dialogue could learn to talk about virtually anything. And “sensibleness” and “specificity” are not the only things Google is looking for in a model like LaMDA: the company is also exploring “interestingness”, assessing whether responses are unexpected, witty, or insightful. It is likewise “investigating ways to ensure LaMDA’s responses aren’t just compelling but correct”.

Besides all this, Google is also looking into preventing the misuse of a model like LaMDA, such as propagating biases, mirroring hate speech, or replicating misleading information. “Even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” the company points out.

Follow HT Tech for the latest tech news and reviews, also keep up with us on Twitter, Facebook, and Instagram. For our latest videos, subscribe to our YouTube channel.