Introduction

If you’re using Conversation Builder bots to automate answers to consumers, you can send answers that are enriched by KnowledgeAI's LLM-powered answer enrichment service. The resulting answers, formulated via Generative AI, are:

  • Accurate
  • Contextually aware
  • Natural-sounding

An example of a helpful, warm answer from a bot

Language support

Learn about language support.

Default prompt style

The default prompt style is “Factual.”

If you'd like to change the prompt style, contact your LivePerson representative. We’ll discuss your use case with you, provide relevant guidance, and make the change for you.

Before you start

When you use enriched answers, it’s important to be aware of the potential for hallucinations. This is especially important when you use them in automated conversations with bots. Unlike with Conversation Assist, there is no agent in the middle as a safeguard.

If you use enriched answers in consumer-facing bots, you accept this risk of hallucinations and the liability that it poses to your brand, as outlined in your legal agreement with LivePerson.

Follow our best practices discussed further below: Test things out with internal bots and in Conversation Assist first.

Get started

  1. Learn about KnowledgeAI's answer enrichment service.
  2. Activate this Generative AI feature.
  3. Turn on enriched answers in Conversation Builder bots, as described next.

Turn on enriched answers

This is a one-step process that you perform within KnowledgeAI:

  • Turn on the knowledge base’s Enriched answers via Generative AI setting. You can turn it on when you first add the knowledge base. Or, you can turn it on for an existing knowledge base via the Settings page.

    Adding a knowledge base

    The Add a knowledge base dialog with a pointer to the Enriched answers via Generative AI toggle

    Editing a knowledge base

    A knowledge base's Settings page with a pointer to the Enriched answers via Generative AI toggle

Perform just this one step. You don’t have to make any changes to the bot within Conversation Builder. Any bot that uses the knowledge base now sends enriched answers to consumers.

Configure the KnowledgeAI integration

There are two ways to integrate KnowledgeAI answers into a bot: using the Knowledge AI interaction, or using the KnowledgeAI integration. Both are discussed below.

Using the Knowledge AI interaction

Take care when setting the Max number of answers interaction setting. In the context of enriched answers, the setting has two purposes:

  1. Determines how many matched articles to retrieve from the knowledge base and send to the LLM-powered service to generate a single, enriched answer.
  2. Determines how many answers to send to the consumer when using one of the "Auto render" answer layouts.

If you set Max number of answers to 1, only the top-scoring article is used by the LLM service to generate the enriched answer (purpose 1), and only that enriched answer is sent to the consumer (purpose 2). The enriched answer is always considered the top answer match, i.e., the article at index 0.

However, if you set Max number of answers to a higher number, keep in mind that the setting has two purposes. If there's more than one article match, multiple articles are used by the LLM service to generate the single, enriched answer (purpose 1). But this also means that the consumer is sent multiple answers, i.e., the enriched answer plus some unenriched answers too (purpose 2). For example, with the setting at 3, three matching articles, and an "Auto render" layout, the consumer receives three messages: the enriched answer plus two unenriched answers.

It can be advantageous to specify a number higher than 1 because more knowledge coverage is provided to the LLM service when generating the enriched answer. As a result, the enriched answer is often better than when it's generated using just a single article.

That said, if you do specify a number higher than 1, you likely don't want to send unenriched answers to the consumer along with the enriched one. If that's the case, keep the value higher than 1, and use the "No auto rendering" (custom) answer layout to send only the enriched answer to the consumer.

The Answer layout setting in the Advanced settings of the KnowledgeAI interaction, where the Answer layout setting is set to No auto rendering

You can display just the enriched answer with:

{$.api_variableName.results[0].summary}

Important notes
  • variableName is the response data variable name that you specified in the Knowledge AI interaction's settings.
  • The enriched answer is always considered the top answer match, i.e., the article at index 0. And the content is returned in the Summary field.
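
If you need the enriched answer in the interaction's pre-process or post-process JavaScript code instead of in a message template, you can read the same response data there. Here's a minimal sketch, not a definitive implementation: it assumes the response data is exposed to scripting under the api_ prefix (mirroring the template above) and that it might arrive as either an array or a JSON string, so it handles both. getBotVariable, setBotVariable, and printDebugMessage are standard Conversation Builder scripting methods.

    // Post-process code: read the enriched answer from the response data.
    // "variableName" is the response data variable name that you set on
    // the Knowledge AI interaction; replace it with your own.
    var results = botContext.getBotVariable('api_variableName.results');
    if (typeof results === 'string') {
        // The data may come back as a JSON string (assumption); parse it if so.
        results = JSON.parse(results);
    }
    if (results && results.length > 0) {
        // The enriched answer is always the top match (index 0), in the summary field.
        botContext.setBotVariable('enrichedAnswer', results[0].summary, true, false);
    } else {
        botContext.printDebugMessage('KnowledgeAI returned no results');
    }

You can then reference the stored enrichedAnswer bot variable in a later interaction.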

If you're using a Knowledge AI interaction, the "Auto render, rich" answer layout is supported, but it isn't recommended. This is because the generated answer might not align well substantively with the image or content URL associated with the highest-scoring article, which is what is used.

Using the KnowledgeAI integration

When using this integration, things are a little more straightforward because the approach is more manual: less is automated for you.

In the integration, the multipleResults setting determines only how many articles to try to retrieve from the knowledge base and send to the LLM-powered service to generate a single, enriched answer.

You can set multipleResults to 1 or to a higher number. It can be advantageous to specify a higher number because more knowledge coverage is provided to the LLM service when generating the enriched answer. As a result, the enriched answer is often better than when it's generated using just a single article.

Whatever your choice, sending the enriched answer to the consumer is a manual implementation step via one or more interactions. Thus, you have control to ensure that only the enriched answer is sent.

The enriched answer is always considered the top answer match, i.e., the article at index 0. And the content is returned in the Summary field.
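
For example, you might render the answer in a plain text interaction whose message is the template shown earlier, and guard against the no-match case in that interaction's pre-process code. Here's a minimal sketch under the same assumptions as the earlier one; escalation_msg is a hypothetical interaction name.

    // Pre-process code: confirm an enriched answer exists before this
    // interaction renders {$.api_variableName.results[0].summary}.
    var results = botContext.getBotVariable('api_variableName.results');
    if (typeof results === 'string') {
        results = JSON.parse(results); // may arrive as a JSON string (assumption)
    }
    if (!results || results.length === 0) {
        // No article matched, so no enriched answer was generated.
        // Skip to a fallback interaction instead; "escalation_msg" is a
        // hypothetical interaction name, so use one from your own dialog.
        botContext.setTriggerNextMessage('escalation_msg');
    }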

Learn more about how to use this integration in a Voice bot.

Consumer experience

An example of a bot conversing with a consumer, and the bot is sending enriched answers to the consumer

Best practices

Internal bots

Consider informing the end user that the information provided by the bot is generated using cutting-edge Generative AI technologies and, as such, might occasionally be inaccurate. And request feedback from your users.

All bots

Always provide pathways (dialog flows) to ensure the user’s query can be resolved.

For example, assume you have a KnowledgeAI integration in the bot’s Fallback dialog, to “catch” and try to handle the queries that aren’t handled by the bot’s regular business dialogs. This is a common scenario. Further assume that this Fallback dialog transfers the conversation to an agent if no answers are returned. In this case, we don’t recommend that you ask us to turn on the feature that calls the enrichment service even when no article matches are found. Why? Because the bot would always provide some kind of response to the consumer. And the transfer to the agent would never happen.

Prompt styles

For Conversation Builder bots, we recommend you use the “Factual” prompt style and never the “Creative” prompt style.

More best practices

See the general KnowledgeAI best practices on using our enrichment service.

Reporting

Use the Generative AI Reporting dashboard within Conversational Cloud to make data-driven decisions that improve the effectiveness of your Generative AI solution.

A view of the Generative AI Reporting dashboard

The dashboard helps you answer these important questions: 

  • How is Generative AI performing in my solution?
  • How much is Generative AI helping my agents and bots?

The dashboard draws conversational data from all channels across Voice and Messaging, producing actionable insights that can drive business growth and improve consumer engagement.

FAQs

Can I use your LLM-powered features to take action on behalf of consumers?

Currently, there aren’t any LLM-powered features that trigger actions, i.e., the business-oriented dialogs in your bot. However, a common design pattern is to implement an LLM-powered KnowledgeAI integration in the bot’s Fallback dialog. This takes care of warmly and gracefully handling all consumer messages that the bot can’t handle. Once the consumer says something that triggers an intent-driven dialog, that dialog flow begins immediately.

As an example, the flow below illustrates a typical conversation between a Telco bot and a consumer inquiring about Internet packages. The consumer’s first 3 messages are handled by the Fallback dialog, but the 4th triggers the business dialog that takes action.

Flow of a conversation to a business dialog when an intent is matched or to a Fallback dialog when it isn't

The great thing about this design pattern is that it works very well with our context switching feature. Once the business dialog begins, it remains “in play” until it’s completed. So, if the consumer suddenly interjects another question that the bot can’t handle — like, “Wait, can I bundle Internet and TV together?” — the flow will move to the Fallback dialog for another LLM-powered, enriched answer. And then, importantly, it will return automatically to the business dialog after sending that answer.

The other great thing about this design pattern is that it works very well with bot groups too. That is, if the consumer’s message matches a dialog in a bot within the same bot group, that other bot’s dialog is immediately triggered. The first bot’s Fallback dialog is never triggered for an enriched answer. Instead, the consumer is taken right into the action-oriented business flow.

I’m getting different results in my deployed bot versus Conversation Builder’s Preview tool versus KnowledgeAI’s Answer Tester. Is this expected?

Semantically speaking, you should get the same answer to the same question. However, don't expect the exact wording to be the same every time. That's the nature of Generative AI: it creates unique content each time.

What’s more, in the case of the deployed bot only, some conversation context for the current conversation is also passed to the enrichment service, to use in formulating the enriched answer. This added context can enhance the results.

Troubleshooting

My consumers aren’t being sent answers as I expect. What can I do?

In the interaction or integration (depending on the implementation approach you're using), try adjusting the Min Confidence Score for Answers (a.k.a. threshold) setting. But also see the KnowledgeAI discussion on confidence thresholds.