Is LaMDA All Hype Or Will it Revolutionise Chatbots?
Google’s LaMDA conversational technology shows signs of a major breakthrough but will it turn the chatbot industry on its head?
Have you ever started a conversation with a chatbot, only to hit a dead-end moments later? Perhaps the bot gave you an irrelevant response that didn’t make any sense, or it proposed transferring you to a real human agent. Despite all the developments in conversational AI over the last few years, it is still difficult to get a chatbot to carry a conversation with a user.
Google recently announced an AI system called LaMDA that could change that, and supercharge a chatbot’s functionality. This has caused a tremendous buzz in the chatbot industry due to its potential.
What is the LaMDA conversational technology?
LaMDA, short for “Language Model for Dialogue Applications”, is part of Google’s attempt to make search more conversational. It is built on the Transformer, a neural network architecture open-sourced by Google Research in 2017 that also forms the basis for powerful NLP systems including GPT-3 and Google’s BERT.
LaMDA is essentially a large language model (LLM)—a deep-learning algorithm that is trained on enormous amounts of text data. Think of these as autocomplete on steroids. They ingest large sets of sentences and dialogues to learn the patterns by which these can be assembled in a way that makes sense. LLMs can therefore be used to enhance the interactivity of chatbots by making them more conversational.
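To make the “autocomplete on steroids” idea concrete, here is a minimal toy sketch (nothing like LaMDA’s actual scale or architecture): a bigram model that learns which word tends to follow which from a tiny, made-up dialogue corpus, then extends a prompt one likely word at a time.

```python
from collections import defaultdict
import random

# Toy "training data" -- a few made-up support-dialogue lines.
corpus = [
    "how can i help you today",
    "how can i reset my password",
    "you can reset your password online",
]

# Learn word -> possible-next-word patterns from the corpus.
next_words = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        next_words[a].append(b)

def autocomplete(prompt, length=4):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # no learned continuation -- stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(autocomplete("how can"))
```

Real LLMs do the same thing in spirit, but over billions of parameters and web-scale text, which is what lets them continue almost any sentence rather than only those seen in training.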
According to Google CEO Sundar Pichai, LaMDA is still in research and development, although the company has been using it internally to explore “novel interactions”.
What makes LaMDA different?
The chatbots of today are getting better at understanding users’ queries and responding intelligently. But most implementations still run into a dead-end if a user’s queries jump between topics. This is where LaMDA shows great promise: it can carry out conversations in a “free-flowing way”, meandering between endless topics without taking the same path twice.
The unique thing about LaMDA is that, unlike other language models, it was trained on dialogue and built for dialogue applications. None of the concepts were pre-defined; it learned them from the training data. Being “open-domain”, it is designed to converse on any topic. And here’s a cool aspect: it does not have to be retrained for each new conversation.
Nuances of open-ended conversations
Chatbot enthusiasts and those who are familiar with development will be aware of conversation design best practices like including prompts, acknowledgements and confirmations. Certain attributes like quality, quantity, relevance and empathy are also key to making chatbots work.
According to Google, during LaMDA’s training, some other interesting attributes were also uncovered, which allow for open-ended conversations. These include:
Sensibleness – whether a response makes sense in the context of the conversation
Interestingness – whether responses are insightful, unexpected or witty
Factuality – whether the answers are factually correct
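Google has not published how it scores these attributes, but a hypothetical toy scorer gives a feel for how they might be used to rank candidate replies. The proxies below (word overlap for sensibleness, new information for interestingness) are this sketch’s own invention, not Google’s method.

```python
def score_response(context, response):
    """Hypothetical toy scorer -- not Google's actual metrics."""
    ctx_words = set(context.lower().split())
    resp_words = set(response.lower().split())
    # Crude "sensibleness" proxy: reply shares vocabulary with the context.
    sensibleness = len(ctx_words & resp_words)
    # Crude "interestingness" proxy: reply adds words not in the context.
    interestingness = 0.5 * len(resp_words - ctx_words)
    return sensibleness + interestingness

context = "tell me about the hubble telescope"
candidates = [
    "yes",
    "yes, the telescope launched in 1990 and still sends back images",
]
# The richer, on-topic reply outranks the bland one.
best = max(candidates, key=lambda r: score_response(context, r))
print(best)
```

A production system would of course learn such scores from human ratings rather than hand-written heuristics, but the ranking-by-attributes pattern is the same.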
What does this mean for chatbots?
The potential use cases of a more conversational chatbot are endless. Although LaMDA is still in its early days of research, organisations looking to deploy enterprise-wide chatbots should start looking to the future. Making interactions with customers more natural and conversational using chatbots based on models like LaMDA could allow them to:
Spend less time and resources on training
If the bot can train itself based on the data and learn to converse across topics, this will relieve data scientists from pre-programming concepts and organisations from preparing domain-specific data.
Reduce dead-ends and drop-offs
It doesn’t take much for consumers to lose attention these days. Users want instant responses to their queries, and any delays or dead-ends can lead to frustration, causing them to drop off or turn to competitor websites. So a chatbot that can carry a conversation in a natural and engaging way, with helpful responses, has a better chance of seeing users through to completion.
Improve customer experience
A natural consequence of fewer dead-ends and drop-offs is that users end up getting what they want. This way, more users are satisfied and the chatbot helps improve the overall customer experience.
Drive more conversions
Chatbots are not only used for sharing information and responding to FAQs. They can be used for automating more transactional tasks like booking appointments at a healthcare institution or submitting claims on an insurance website. If the chatbot can keep users engaged through conversations long enough and also provide them with the right responses to complete transactions, this could help drive more conversions via the bot itself.
These are still early days, but the potential of LaMDA shows that conversational AI technology is advancing at a rapid rate. When the GPT-2 model was released, it was the biggest language model of its time. Then, barely a year later, Microsoft released its Turing-NLG model to topple GPT-2.
This was followed by GPT-3 eclipsing everything before it. The conversational AI landscape is evolving too quickly for businesses to settle on any one model. The prudent thing to do is to watch how the technology evolves over the next year or two.
To explore how KeyReply is staying at the forefront of conversational AI technology, ask us for a free demo today.