Introduction
LivePerson’s Conversational Cloud includes a Large Language Model (LLM) Gateway that sits between our applications and the LLM service we use for Generative AI.
The primary job of the LLM Gateway is to pass requests to the LLM service and return the responses it receives. In this role, the gateway also performs post-processing on those responses that is both vital and useful.
Hallucination Detection post-processing
Hallucinations are situations where the LLM service generates incorrect or nonsensical responses, or responses that aren't grounded in the contextual data or brand knowledge that was provided.
Typically, hallucinations happen when the LLM service relies too heavily on its internal knowledge and fails to effectively use the provided source content.
Because any LLM service can hallucinate, detection is vital.
What hallucinations are detected
The Hallucination Detection post-processing service in our LLM Gateway takes the response received from the LLM service and checks it for the following types of hallucinations (see the sketch after this list):
- URLs
- Phone numbers
- Email addresses
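As a rough illustration, the check can be thought of as comparing these data points in the LLM response against the source content that grounded it. The sketch below is a minimal, hypothetical version of that idea; the regex patterns and the `find_ungrounded_entities` function are assumptions for illustration, not the gateway's actual implementation.

```python
import re

# Hypothetical patterns; the gateway's real detection rules aren't public.
PATTERNS = {
    "url": re.compile(r"https?://[^\s)>\"']+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def find_ungrounded_entities(llm_response: str, source_content: str) -> list[dict]:
    """Return URLs, phone numbers, and email addresses that appear in the
    LLM response but not in the source content that was provided to it."""
    findings = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.finditer(llm_response):
            value = match.group()
            if value not in source_content:
                findings.append({"type": kind, "value": value, "span": match.span()})
    return findings
```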
How hallucinations are handled
The LLM Gateway handles requests from multiple LivePerson client applications, and different applications have different requirements for what should be done with hallucinations. So, the Hallucination Detection post-processing service supports two behaviors:
- Marking the hallucination in the response
- Rephrasing the response without the hallucination
Marking the hallucination in the response
This functionality can be used for URLs, phone numbers, and email addresses.
When this functionality is used, the hallucination isn’t removed. Instead, the service marks or delineates the data point that’s deemed unfounded, so the client application that receives the response can 1) identify where it’s located in the response, and 2) handle it as required.
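To illustrate what a client application might do with a marked response, here's a minimal sketch. The `<hallucination>` marker syntax and the `mask_marked_hallucinations` helper are hypothetical assumptions made for this example; the gateway's actual marking format isn't documented here.

```python
import re

# Assume, hypothetically, that the unfounded data point arrives delineated
# with <hallucination>...</hallucination> tags in the response text.
MARKED_RESPONSE = (
    "You can reach our billing team at "
    "<hallucination type=\"phone\">1-800-555-0199</hallucination>."
)

def mask_marked_hallucinations(response: str, placeholder: str = "[agent: add phone number]") -> str:
    """Replace each marked data point with a placeholder the agent can fill in."""
    return re.sub(r"<hallucination[^>]*>.*?</hallucination>", placeholder, response)

print(mask_marked_hallucinations(MARKED_RESPONSE))
# You can reach our billing team at [agent: add phone number].
```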
Our KnowledgeAI application takes advantage of this functionality when returning answers to our Conversation Assist application. In turn, Conversation Assist masks the hallucination and substitutes a placeholder for the needed info within the recommended answer, so the agent can quickly fix the issue (enter the correct info) before sending the answer to the consumer. Learn more.
Rephrasing the response to exclude the hallucination
This functionality can be used for URLs, phone numbers, and email addresses.
When this functionality is used, the service tries to return a rephrased response without the hallucination. If the rephrased response still contains the data point or statement that’s deemed unfounded, the service discards the response and instead returns the following response to the client application:
I'm sorry, but I don't have that information available to me. Please contact our customer care team for further assistance.
Currently, this “fallback” response isn’t customizable, but stay tuned for enhancements in this area.
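Conceptually, the behavior is: attempt a rephrase, re-check the result, and fall back if the unfounded data point is still present. The `rephrase_or_fall_back` function below is a hypothetical sketch of that flow (it reuses the illustrative `find_ungrounded_entities` helper from the earlier sketch), not the gateway's actual implementation.

```python
from typing import Callable

FALLBACK_RESPONSE = (
    "I'm sorry, but I don't have that information available to me. "
    "Please contact our customer care team for further assistance."
)

def rephrase_or_fall_back(
    response: str,
    source_content: str,
    rephrase: Callable[[str], str],  # e.g. a second LLM call that drops the data point
) -> str:
    """Try to rephrase the response without the ungrounded data points;
    if any remain after rephrasing, discard the response and fall back."""
    if not find_ungrounded_entities(response, source_content):
        return response  # nothing ungrounded; pass the response through
    rephrased = rephrase(response)
    if find_ungrounded_entities(rephrased, source_content):
        return FALLBACK_RESPONSE
    return rephrased
```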
Our KnowledgeAI application takes advantage of this functionality when returning answers to our Conversation Builder application. This is so bots don’t send messages containing these types of hallucinations in automated conversations. Learn more.
Supporting the goal of trustworthy Generative AI
LivePerson’s Hallucination Detection post-processing is designed to protect your Conversational AI solution from hallucinations in LLM-powered responses. For this reason, you can’t disable it, nor should you try to work around it. All brands should also put in place appropriate human oversight to minimize user exposure to hallucinations.
URL post-processing
In Messaging contexts, clickable URLs are handy and support an optimal experience for the end user. However, responses from the Large Language Model (LLM) service, which are generated via Generative AI, are always returned as plain text.
That’s where our URL post-processing service in the LLM Gateway comes in. It takes all the URLs in the response and wraps them in HTML tags, so they become active, clickable URLs.
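Conceptually, this amounts to detecting each plain-text URL and wrapping it in an HTML anchor tag. The sketch below is a minimal illustration under that assumption; the gateway's actual URL detection rules and tag attributes aren't documented here.

```python
import re

URL_PATTERN = re.compile(r"https?://[^\s)>\"']+")

def linkify_urls(response: str) -> str:
    """Wrap each plain-text URL in an HTML anchor tag so it renders as a
    clickable link in the messaging channel."""
    return URL_PATTERN.sub(lambda m: f'<a href="{m.group()}">{m.group()}</a>', response)

print(linkify_urls("See https://example.com/help for details."))
# See <a href="https://example.com/help">https://example.com/help</a> for details.
```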
Default configuration
By default, the URL post-processing service is on for KnowledgeAI for all accounts. To turn it off, contact your LivePerson representative.