Introduction

Hallucinations occur when the LLM service generates incorrect or nonsensical responses, or responses that aren't grounded in the contextual data or brand knowledge that was provided.

Typically, hallucinations happen when the LLM service relies too heavily on the underlying model's training knowledge and fails to make effective use of the provided source content.

Because hallucinations can occur in any LLM service's responses, detecting them is vital.

What hallucinations are detected

The Hallucination Detection post-processing service in our LLM Gateway takes the response received from the LLM service and checks it for the following types of hallucinations:

  • URLs
  • Phone numbers
  • Email addresses
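
To make the check concrete, here's a minimal sketch of a grounding check for these data types: extract URLs, phone numbers, and email addresses from the LLM response and flag any that don't appear in the provided source content. The regex patterns and the find_ungrounded_entities helper are illustrative assumptions, not the service's actual implementation.

```python
import re

# Illustrative patterns only; the service's actual detection logic isn't published.
PATTERNS = {
    "URL": re.compile(r"https?://[^\s)>\]]+"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{6,}\d"),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def find_ungrounded_entities(response: str, source_content: str) -> list[dict]:
    """Return the URLs, phone numbers, and email addresses in the response
    that don't appear anywhere in the provided source content."""
    findings = []
    for entity_type, pattern in PATTERNS.items():
        for match in pattern.finditer(response):
            value = match.group()
            if value not in source_content:  # naive grounding check, for illustration
                findings.append({
                    "type": entity_type,
                    "value": value,
                    "start": match.start(),
                    "end": match.end(),
                })
    return findings
```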

How hallucinations are handled

The LLM Gateway handles requests from multiple LivePerson client applications, and different applications have different requirements when it comes to what should be done with hallucinations. So, the Hallucination Detection post-processing service supports two behaviors:

  • Marking the hallucination in the response
  • Rephrasing the response to exclude the hallucination

Marking the hallucination in the response

This functionality can be used for URLs, phone numbers, and email addresses.

When this functionality is used, the hallucination isn’t removed. Instead, the service marks (delineates) the data point that’s deemed unfounded, so the client application that receives the response knows where it’s located and can handle it as required.

Our KnowledgeAI application takes advantage of this functionality when returning answers to our Conversation Assist application. In turn, Conversation Assist masks the hallucination in the recommended answer, substituting a placeholder for the needed info, so the agent can quickly fix the issue (enter the correct info) before sending the answer to the consumer. Learn more.
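
The marking format isn't specified here, so the sketch below assumes the gateway returns character offsets for each flagged span alongside the response text. Under that assumption, a client application could mask each flagged span with a placeholder for the agent to fill in, much as Conversation Assist does; the mask_flagged_spans helper and the example payload are hypothetical.

```python
def mask_flagged_spans(response: str, spans: list[dict]) -> str:
    """Replace each flagged span (given as start/end offsets) with a placeholder
    the agent can fill in, working right to left so earlier offsets stay valid."""
    masked = response
    for span in sorted(spans, key=lambda s: s["start"], reverse=True):
        placeholder = f"[AGENT: insert correct {span['type']}]"
        masked = masked[: span["start"]] + placeholder + masked[span["end"]:]
    return masked


# Hypothetical example: the gateway flagged one ungrounded phone number.
response = "You can reach support at 555-0100 between 9am and 5pm."
spans = [{"type": "phone number", "start": 25, "end": 33}]
print(mask_flagged_spans(response, spans))
# You can reach support at [AGENT: insert correct phone number] between 9am and 5pm.
```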

Rephrasing the response to exclude the hallucination

This functionality can be used for URLs, phone numbers, and email addresses.

When this functionality is used, the service tries to return a rephrased response that excludes the hallucination. If the rephrased response still contains the data point or statement that’s deemed unfounded, the service discards it and instead returns the following response to the client application:

I'm sorry, but I don't have that information available to me. Please contact our customer care team for further assistance.

Currently, this “fallback” response isn’t customizable, but stay tuned for enhancements in this area.
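
As a rough illustration of this flow (not the gateway's actual code), the sketch below reuses the find_ungrounded_entities helper from the earlier sketch: check the response, attempt a rephrase, re-check the result, and fall back to the fixed message if an ungrounded data point survives. The rephrase callable is a hypothetical stand-in for whatever rephrasing step the service performs.

```python
FALLBACK_RESPONSE = (
    "I'm sorry, but I don't have that information available to me. "
    "Please contact our customer care team for further assistance."
)

def apply_rephrasing_behavior(response: str, source_content: str, rephrase) -> str:
    """Try to return a rephrased response without the ungrounded data points;
    if any survive the rephrase, return the fixed fallback message instead."""
    # find_ungrounded_entities is the illustrative helper from the first sketch.
    if not find_ungrounded_entities(response, source_content):
        return response  # the response is already grounded; nothing to do
    rephrased = rephrase(response)  # hypothetical rephrasing step (e.g. a second LLM call)
    if find_ungrounded_entities(rephrased, source_content):
        return FALLBACK_RESPONSE  # the hallucination survived, so discard the response
    return rephrased
```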

Our KnowledgeAI application takes advantage of this functionality when returning answers to our Conversation Builder application. This is so bots don’t send messages containing these types of hallucinations in automated conversations. Learn more.

Supporting the goal of trustworthy Generative AI

LivePerson’s Hallucination Detection post-processing service is designed to protect your Conversational AI solution from hallucinations in LLM-powered responses. For this reason, you can’t disable it, nor should you try to work around it. All brands should also put appropriate human oversight in place to minimize consumer exposure to hallucinations.