This feature is in Early Access release. To enable this feature, contact your LivePerson representative.
Enhancing the user’s query using conversation context
By default, when a user’s query is used to find an answer (a matched article) in a knowledge base, that query is just a single utterance—the most recent one—in the conversation.
The challenge is that a single utterance often doesn't provide enough context to retrieve a high-quality answer.
Consider the following fictitious conversation between a consumer and an agent. Note the consumer’s final query:
Consumer: Hi there! I’m interested in signing up for a new mobile phone plan. Could you help me with that?
Agent: Absolutely! I'd be happy to help. Are you looking for a specific type of plan, or would you like me to go over our options with you?
Consumer: I’m not entirely sure yet. I need something with a good amount of data, as I stream a lot of videos. But I don’t want to spend too much.
Agent: Got it. We have a few options that might suit your needs. Our most popular plan includes 10GB of high-speed data, unlimited calls, and texts for $40 per month. If you think you'll need more data, we also have a 20GB plan for $60 per month.
Consumer: Hmm, I think the first option might be enough. Sign me up!
As is so often the case in a natural conversation, the consumer's final query above lacks specificity: What's the first option? Sign up for what? This information is clear only with more of the conversation's context.
KnowledgeAI™ solves the issue of a suboptimal consumer query by way of query contextualization. When requested, KnowledgeAI gathers additional "turns" in the conversation and sends them, along with the user's latest utterance, to an in-house, state-of-the-art LivePerson small language model. The model uses this information to rephrase the consumer's query.
In our example above, the consumer’s final query might be rephrased to something like:
Consumer: I need to sign up for a new mobile phone plan that includes 10GB of high-speed data, unlimited calls, and texts for $40 per month.
Or:
Consumer: I'm interested in signing up for a new mobile phone plan. The first choice might be enough. Sign me up.
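To make this concrete, here's a minimal sketch (in Python) of what the rephrasing step conceptually does. All names here are hypothetical stand-ins; KnowledgeAI performs this step internally and doesn't expose such an interface.

```python
# Illustrative only: hypothetical names, not a KnowledgeAI API.

def rephrase_query(turns: list[dict], rewrite_model) -> str:
    """Combine prior conversation turns with the consumer's latest
    utterance, and ask a small language model to rewrite the query
    so that it stands on its own."""
    context = "\n".join(f"{t['role']}: {t['text']}" for t in turns[:-1])
    latest = turns[-1]["text"]  # the consumer's most recent utterance
    return rewrite_model(context=context, query=latest)

# Toy stand-in for the fine-tuned model, so the sketch runs end to end:
def toy_model(context: str, query: str) -> str:
    return f"[rewritten using context] {query}"

turns = [
    {"role": "Consumer", "text": "Hi there! I'm interested in signing up for a new mobile phone plan."},
    {"role": "Agent", "text": "Our most popular plan includes 10GB of high-speed data, unlimited calls, and texts for $40 per month."},
    {"role": "Consumer", "text": "Hmm, I think the first option might be enough. Sign me up!"},
]
print(rephrase_query(turns, toy_model))
```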
How query contextualization works
When a calling application, such as Conversation Assist, sends a request to KnowledgeAI to retrieve answers (articles) that match a user’s query, query contextualization works as follows:
1. The request from the calling application is checked to see whether the query should be enhanced using the conversation's context. If not, the flow moves directly to KnowledgeAI's knowledge base search (query enhancement is skipped). If so, the flow moves to the next step.
2. The query's type is checked: request for help, small talk (chitchat), and so on. If the type is small talk, the flow moves directly to the knowledge base search (query enhancement is skipped). Otherwise, the flow moves to the next step.
3. The query is rephrased using the conversation's context, and the flow then moves to the knowledge base search. This decision flow is sketched in the code after this list.
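Here's a rough sketch of that decision flow in Python. The helper functions are hypothetical stand-ins for KnowledgeAI's internal steps, not a public API.

```python
# Illustrative decision flow only; all helper names are hypothetical.

def classify_query(query: str) -> str:
    # Toy classifier; the real system uses a trained model.
    small_talk = {"hi", "hello", "thanks", "bye"}
    return "small_talk" if query.lower().strip("!. ") in small_talk else "other"

def rephrase_with_context(turns: list[dict]) -> str:
    # Stand-in for the rephrasing step sketched earlier.
    return f"[rewritten using context] {turns[-1]['text']}"

def search_knowledge_base(query: str) -> list[dict]:
    # Stand-in for the knowledge base search.
    return [{"article": "…", "query_used": query}]

def retrieve_answers(request: dict, turns: list[dict]) -> list[dict]:
    query = turns[-1]["text"]

    # Step 1: Did the calling application ask for query enhancement?
    if not request.get("use_conversation_context", False):
        return search_knowledge_base(query)  # enhancement skipped

    # Step 2: Skip enhancement for small talk.
    if classify_query(query) == "small_talk":
        return search_knowledge_base(query)  # enhancement skipped

    # Step 3: Rephrase using the conversation's context, then search.
    return search_knowledge_base(rephrase_with_context(turns))
```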
Original query versus enhanced query: Comparing the results
Worried that the original query might yield a superior result? Our testing indicates that this is unlikely. But even in that event, you're covered: When you choose to use query contextualization, KnowledgeAI automatically performs a parallel search of the knowledge base using the original query. The confidence scores of the two result sets are then compared. The inferior results are discarded, and the superior results (the set containing the top-scoring article) are kept and passed along in the flow.
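As a sketch of that comparison (the "confidence" field, scores, and article names are made up for illustration):

```python
# Keep whichever result set contains the top-scoring article.

def pick_results(original_results: list[dict], enhanced_results: list[dict]) -> list[dict]:
    def top_score(results: list[dict]) -> float:
        return max((r["confidence"] for r in results), default=0.0)

    if top_score(enhanced_results) >= top_score(original_results):
        return enhanced_results
    return original_results

# Example: the enhanced query's best article scores higher, so its
# result set is the one passed along in the flow.
original = [{"article": "Mobile plans overview", "confidence": 0.42}]
enhanced = [{"article": "Sign up for the 10GB, $40/month plan", "confidence": 0.87}]
print(pick_results(original, enhanced))
```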
Language limitation
Queries that aren't in English aren't rephrased using the conversation context. In this case, the model rejects the rephrasing request and simply returns the original query.

That said, we welcome your feedback on areas for enhancement. If you'd like support for another language, please contact your LivePerson representative.
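Conceptually, the guard looks like this sketch, which uses the open-source langdetect package as a stand-in for LivePerson's internal language detection (the helper name is hypothetical):

```python
from langdetect import detect  # pip install langdetect

def rephrase_with_context(turns: list[dict]) -> str:
    # Hypothetical stand-in for the rephrasing step.
    return f"[rewritten using context] {turns[-1]['text']}"

def maybe_rephrase(turns: list[dict]) -> str:
    query = turns[-1]["text"]
    if detect(query) != "en":
        return query  # non-English: the original query is returned unchanged
    return rephrase_with_context(turns)
```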
LLM used for query contextualization
The LLM that rephrases the query is a state-of-the-art LivePerson small language model (a decoder) that's fine-tuned for query contextualization tasks.
It’s not possible to customize the prompt that’s sent to the model. The prompt is tailored to suit the model.
Applications using query contextualization and the ROI
Currently, support for query contextualization is only available if the calling application is Conversation Assist. You can turn it on in a knowledge base rule.
LivePerson strongly recommends that you turn on this feature. When used in Conversation Assist, it yields answer recommendations that are more accurate and relevant, which increases the rate at which your agents use recommendations. In turn, this reduces the effort (in-focus time) expended by your agents, increases their efficiency (response time), and ultimately improves the overall conversational experience (CSAT, NPS) for the consumer.
FAQs
I have an “external KB without LivePerson AI.” Can I use query contextualization?
Yes, the feature is supported with all types of knowledge bases, including external knowledge bases without LivePerson AI. This is because query contextualization enhances (rephrases) the query before it's used to fetch matching articles from the knowledge base, so the type of knowledge base doesn't matter.
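In other words, the search backend only ever receives a plain query string. A sketch (the Protocol and helper are illustrative, not a LivePerson interface):

```python
from typing import Protocol

class KnowledgeBase(Protocol):
    def search(self, query: str) -> list[dict]: ...

def rephrase_with_context(turns: list[dict]) -> str:
    # Hypothetical stand-in for the rephrasing step.
    return f"[rewritten using context] {turns[-1]['text']}"

def answer(kb: KnowledgeBase, turns: list[dict]) -> list[dict]:
    enhanced = rephrase_with_context(turns)  # knowledge-base-independent step
    return kb.search(enhanced)  # works the same for internal and external KBs
```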