Conversational AI platforms are becoming increasingly critical to digital transformation plans. Here are three essential features that must be at the core of every solution.
Chatbots, virtual assistants, and conversational AI agents are all solutions that help automate conversations and query responses at varying levels of complexity. In their earliest incarnations, chatbots were simple bots that pushed notifications to users. These evolved into rule-based models that automated responses to FAQs.
Today’s versions are much more sophisticated, however, incorporating natural language processing (NLP) and machine learning (ML) models to be more conversational and predictive. This means they can understand intent, entities and their variations, and context, and can come with a remarkably human-like personality.
IBM, Google, and Amazon are some companies classified as “visionary leaders” in the chatbot market, according to a 2019 study conducted by MarketsandMarkets. Other players studied in the landscape include “innovators” like Artificial Solutions and emerging companies like KeyReply.
The intelligence and efficiency of conversational AI solutions differ in the extent to which they have mastered 3 key components:
- The AI Model
- ‘Creative’ Engineering
- Domain Expertise
We will explore these in detail but there is one other fundamental aspect of building conversational AI that organisations need to consider before anything else: Data preparation.
Step 0: Data Preparation – the Foundation of Every Conversational AI
Before embarking on any chatbot adoption initiative, the first step should be the collection, cleaning and categorisation of data that will be used for training. Think of data preparation as the foundational step in every conversational AI project, without which you may not be sure of the success of the entire enterprise.
This step is where we collect examples of real user queries, with all their intents, entities and variations, to create a bot training data set. Intents refer to the intentions behind questions. Many questions, phrased differently, can have the same intent behind them.
For instance, these variations all mean the same thing:
- “I want to book an appointment for a health screening”
- “Can I book a health screening appointment?”
- “How do I schedule a health screening?”
There can also be variations of questions with the same intent in different languages. Intents for questions that are similar to each other can also conflict, and must be kept separate. Consider these four queries:
- “What tests are covered in basic health check up?”
- “What tests are covered in executive health screening?”
- “What tests are covered in X-ray"
- “What tests are covered in premium screening?”
To a human, these are four different scenarios with distinct intents. But machines do not think like humans. To a machine, these four questions are one and the same.
This is where entities come in. Entities are lists of key terms or phrases that help give context to the intent for a question. The items “basic health check up”, “executive health screening”, “X-ray“ and “premium screening” all refer to different medical test packages. The conversational AI agent can then understand questions about each and respond according to each entity.
By providing context to the intent, entities help bots address more scenarios with just one sentence structure. Covering all the scenarios then simply becomes a matter of covering all the entities. In other words, entities also help scale up the scope coverage of the bot with the same amount of training data and model.
| Query | Entity |
| --- | --- |
| “What tests are covered in basic health check up?” | basic health screening |
| “What tests are covered in executive health screening?” | executive health screening |
| “What tests are covered in X-ray?” | X-ray |
| “What tests are covered in premium screening?” | premium health screening |
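To make the idea concrete, here is a minimal sketch of how a single intent combined with an entity list can cover all four queries. The intent name, entity lists and matching logic are illustrative assumptions, not any vendor's actual API; a real engine would use trained NLP models rather than substring matching.

```python
# A minimal sketch of intent + entity matching.
# Intent name, entities and synonyms below are illustrative assumptions.
TRAINING = {
    "screening_tests_covered": {
        "pattern": "what tests are covered in",
        "entities": {
            "basic health screening": ["basic health check up", "basic health screening"],
            "executive health screening": ["executive health screening"],
            "X-ray": ["x-ray"],
            "premium health screening": ["premium screening", "premium health screening"],
        },
    }
}

def classify(query: str):
    """Return (intent, entity) for a query, or (None, None) if no match."""
    q = query.lower()
    for intent, spec in TRAINING.items():
        if spec["pattern"] in q:
            for entity, synonyms in spec["entities"].items():
                if any(s in q for s in synonyms):
                    return intent, entity
    return None, None

print(classify("What tests are covered in premium screening?"))
# -> ('screening_tests_covered', 'premium health screening')
```

Note how adding a new test package only requires a new entity entry, not new training sentences: this is the scaling benefit described above.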
When collecting the data, it is important to capture all such examples of intents and entities and label them clearly with a consistent naming convention. Take care to avoid labelling questions with different intents in the same example.
If done well, the data preparation step will define the scope that the chatbot will need to cover and ensure that it can respond to most user queries correctly.
|PRO TIP 1: As a best practice, aim to collect at least 10 to 20 examples in each language.|
It is also critical to collect as much real-world data as possible when training the bot. This ensures that the data set is a good representation of how real users ask questions. A good place to start is to talk to the customer-facing teams in your organisation who interact with customers daily.
|PRO TIP 2: Gather information from the teams in your organisation who interact with customers on a daily basis. They will be able to prioritize topics and help you define clearly what customers really need help with.|
Now that we have established the importance of the data preparation step, let us jump into the three key components.
1. The AI Model
The type and complexity of the model used to train the bot makes a substantial difference. There was a time when only the big tech giants like IBM and Microsoft had the know-how and processing capabilities to work on complex NLP and ML models.
But over the years, the open source movement, access to public data sets and cloud computing have led to the commoditisation of AI and chatbot products. This has crowded the market with many providers competing for superiority but with only negligible differences in their capabilities. Thus, there is a limit to how much a generalised model alone can differentiate a conversational AI solution today.
As a basic requirement, look for providers that invest in dedicated R&D teams (both software and hardware), with an AI tech stack that includes ML and NLP algorithms capable of handling intents, entity engines, contextual knowledge and sentiment analysis.
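To make the intent-handling requirement concrete, here is a deliberately simple bag-of-words intent classifier in plain Python. The intents and training sentences are illustrative assumptions; a production stack would use trained ML/NLP models with entity engines and context tracking rather than word overlap.

```python
from collections import Counter

# Illustrative training examples per intent (assumed, not real product data)
EXAMPLES = {
    "book_appointment": [
        "I want to book an appointment for a health screening",
        "can I schedule a health check up",
    ],
    "opening_hours": [
        "what are your opening hours",
        "when is the clinic open",
    ],
}

def tokens(text: str) -> Counter:
    """Bag-of-words representation of a sentence."""
    return Counter(text.lower().split())

def predict_intent(query: str) -> str:
    """Pick the intent whose best example shares the most words with the query."""
    q = tokens(query)
    def best_overlap(intent: str) -> int:
        return max(sum((q & tokens(e)).values()) for e in EXAMPLES[intent])
    return max(EXAMPLES, key=best_overlap)

print(predict_intent("when are you open"))  # -> 'opening_hours'
```

Even this toy version shows why data preparation matters: the classifier is only as good as the example sentences it is given.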
2. ‘Creative’ Engineering
As technical a task as it is to develop bots that can hold intelligent conversations with users, it takes a lot of creativity to stand out from the competition. Merely integrating a bot with existing systems will not cut it any more.
Instead, the most advanced solutions will have figured out ingenious ways of engineering the solution – from data cleaning to effective testing methods and layering of NLP models. Providers who can come up with innovative methods of pulling data from various data sets to discover unseen correlations, patterns and insights will hold an advantage.
These engineering decisions may not be apparent to the end user, who only interacts with the user interface, which looks the same: a chat widget. Rest assured, however, that these nuanced choices make a significant difference to the solution.
Take the case of a conversational AI tasked with disseminating helpful information about the COVID-19 pandemic. As a rapidly evolving subject with a growing volume of expected queries, it may be too hard and time-consuming for small teams to keep up and scale their hand-crafted responses using traditional NLP techniques for maintaining question and answer sets.
At the same time, one can expect authoritative sites like Centers for Disease Control and Prevention to constantly be updating their libraries with the most up-to-date information. If this info can be scraped and layered on top of the traditional NLP model providing hand-crafted responses, the challenge of scaling becomes way more manageable.
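One way to sketch this layering is a two-tier lookup: curated answers first, a scraped knowledge base as fallback. The answers, keywords and data structures below are illustrative stand-ins; a real system would use proper retrieval over regularly scraped content.

```python
# Layering sketch: hand-crafted answers first, scraped knowledge as fallback.
# All content below is a placeholder, not real medical guidance.
HANDCRAFTED = {
    "what is covid-19": "COVID-19 is a disease caused by the SARS-CoV-2 virus.",
}

# Stand-in for content scraped from an authoritative source such as the CDC.
SCRAPED_KB = {
    "masks": "Guidance on masks, refreshed from the source site on each scrape.",
}

def answer(query: str) -> str:
    q = query.lower().strip("?")
    if q in HANDCRAFTED:                         # layer 1: curated responses
        return HANDCRAFTED[q]
    for keyword, passage in SCRAPED_KB.items():  # layer 2: scraped knowledge
        if keyword in q:
            return passage
    return "Sorry, I don't have information on that yet."
```

The design choice is that the curated layer stays small and hand-maintained, while the scraped layer absorbs the scaling burden as the topic evolves.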
3. Domain Expertise
The technology behind conversational AI by itself is industry agnostic. But each use case has its own needs depending on the domain or industry in question. A conversational AI trained on data in the finance and banking sector may come up short when used in a healthcare context.
It is the domain expertise that makes the difference in specific use cases.
Domain expertise includes two branches: ontology, and rules and protocols.
a) Ontology
Ontology refers to a set of concepts and categories in a subject area or domain that shows their properties and the relations between them. Each industry has terms, concepts and ways in which they relate to each other that are unique to its own domain.
For example, in the banking and financial services sector, common user queries could relate to loans, interest rates and account creation for personal banking. In the insurance sector, the topics could include life insurance, health insurance, critical illness coverage and premium payments.
Healthcare institutions often must deal with questions about health screening appointments, doctor consultations for specific symptoms and issues with prioritising patient screening during crises. The vocabulary used in the day to day operations is also important. How people explain their symptoms, common misconceptions about flu and medical terminology corresponding to the layman’s descriptions – all these need to be incorporated into the development of the conversational AI.
A thorough understanding of how people talk and expertise in the language of the domain is therefore essential to enable the conversational AI to run effectively. This is another reason for capturing all relevant conversations and dialogue, industry jargon and real-world queries.
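A simple way to encode part of this domain vocabulary is a synonym map that normalises layman descriptions into clinical terms before intent matching. The mappings below are illustrative assumptions; a real ontology would be far richer and curated with clinical input.

```python
# Illustrative layman-to-clinical synonym map for query normalisation.
SYNONYMS = {
    "runny nose": "rhinorrhea",
    "high temperature": "fever",
    "tummy ache": "abdominal pain",
}

def normalise(query: str) -> str:
    """Rewrite layman phrases into clinical terms before NLP processing."""
    q = query.lower()
    for layman, clinical in SYNONYMS.items():
        q = q.replace(layman, clinical)
    return q

print(normalise("I have a high temperature and a runny nose"))
# -> 'i have a fever and a rhinorrhea'
```

Normalising vocabulary this way lets the downstream intent model train on one canonical term per concept instead of every colloquial variant.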
b) Rules and Protocols
Depending on the use case, conversational AI solutions can also have varying impact on the end user. In the entertainment industry, retail or customer service, for instance, the topics that chatbots deal with are relatively harmless. Being directed to the wrong product page or discount details may not signal the end of the world.
But in high impact use cases like healthcare, where it is often a life or death matter, the responses must comply with clinical protocols. There is often a standard operating procedure, which dictates the guidelines for responses.
Consider a scenario where a customer enquires about the procedures to treat her child, who may have suffered trauma from a fall. Protocol would demand that the bot first ask about the height of the fall, the child’s current state of consciousness and whether the child is experiencing a seizure, before proceeding with directions.
In such cases, the NLP models and AI have to take a back seat and cannot bypass or deviate from the protocols. This ensures that no harm results from an erroneous response.
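Such a protocol can be sketched as a fixed checklist that must run to completion before any model-generated response is allowed. The slots and questions below are illustrative, not actual clinical guidance.

```python
# Sketch: a fall-triage protocol that must complete before the NLP
# model is allowed to answer. Slots and questions are illustrative.
PROTOCOL_QUESTIONS = [
    ("fall_height", "What height did the child fall from?"),
    ("conscious", "Is the child conscious and responsive?"),
    ("seizure", "Is the child experiencing a seizure?"),
]

def next_step(answers: dict) -> str:
    """Return the next unanswered protocol question, or clear the bot to proceed."""
    for slot, question in PROTOCOL_QUESTIONS:
        if slot not in answers:
            return question          # the protocol overrides the NLP model
    return "PROTOCOL_COMPLETE"       # only now may the model respond freely
```

The key property is that the checklist is deterministic: no model output can skip a question, which is exactly the "protocol takes precedence" behaviour described above.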
There are plenty more features that could set apart one solution from another, especially in niche industries and use cases. But the above three qualities form the core of any solution that can claim to be realistically operating in the conversational AI landscape today.
Find out more about how KeyReply incorporates these key attributes in its conversational AI solution.