Not ready to make use of Generative AI and LLMs? No problem. You don't need to incorporate these technologies into your solution. The choice is yours: Use Conversation Assist with or without Generative AI.

Introduction

If you’re using Conversation Assist to offer answer recommendations to your agents, those recommendations can be enriched by KnowledgeAI's LLM-powered answer enrichment service. We call this offering Copilot Assist. The resulting answers, formulated via Generative AI, are:

  • Grounded in knowledge base content
  • Contextually aware
  • Natural-sounding

An example of enriched answers being offered to an agent inline in a conversation, as well as via the On-Demand Recommendations widget

Language support

Learn about language support.

Default prompt

To get you up and running quickly, Conversation Assist makes use of a default Enrichment prompt. An Enrichment prompt is required, and we don't want you to have to spend time selecting one when you're just getting started and exploring.

Here's the default prompt that Conversation Assist passes to KnowledgeAI when requesting enriched answers:

The default prompt for knowledge recommendation sources in Conversation Assist

Use the default prompt for a short time during exploration. But be aware that LivePerson can change it without notice, altering the behavior of your solution accordingly. To avoid this, at your earliest convenience, duplicate the prompt and use the copy, or select another prompt from the Prompt Library.

You can learn more about the default prompt by reviewing its description in the Prompt Library.

Get started

  1. Learn about KnowledgeAI's answer enrichment service.
  2. Activate this Generative AI feature.
  3. Turn on enriched answer recommendations in Conversation Assist, as described next.

Turn on enriched answers

You turn on enriched answers at the knowledge base level in Conversation Assist. This means you can turn on the feature for some knowledge bases that you're using as recommendation sources but not others.

  • In Conversation Assist, open the recommendation source (knowledge base) for editing, and turn on the Enriched answers via Generative AI setting. Click Save.

    Turning on the Enriched answers via Generative AI setting within a knowledge base recommendation source

Select a prompt

The process of selecting or changing a prompt is the same for the Enrichment prompt (required) and the No Article Match prompt (optional).

  1. In Conversation Assist, go to the Settings page.
  2. Scroll down to the Prompts section under Answer recommendations.
  3. Select the existing prompt to open the Prompt Library.
  4. Click Go to library in the lower-left corner.
  5. In My Prompts, select the prompt you want to use.
  6. Click Select.

Changing the prompt to use in the interaction

You can't specify different prompts for different knowledge bases that you're using as recommendation sources. You specify a single Enrichment prompt and optionally a single No Article Match prompt for all knowledge bases at the account level, in Settings.

When you're in the Prompt Library selecting a prompt, you can also create, edit, and copy prompts on the fly.

Agent experience

An example of enriched answers being offered to an agent inline in a conversation, as well as via the On-Demand Recommendations widget

Hallucination handling

Hallucinations in LLM-generated responses happen from time to time, so a Generative AI solution that’s trustworthy requires smart and efficient ways to handle them.

When returning answers to Conversation Assist, by default, KnowledgeAI takes advantage of our LLM Gateway’s ability to mark hallucinated URLs, phone numbers, and emails. This is done so that the client application—in this case, Conversation Assist—can understand where the hallucination is located in the response and handle it as required.

For its part, when Conversation Assist receives a recommended answer that contains a marked hallucination (URL, phone number, or email address), it automatically masks the hallucination and replaces it with a placeholder for the right info. These placeholders are visually highlighted for agents, so they can quickly see where to take action and fill in the right info.
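The masking step can be pictured with a short sketch. This is purely illustrative: the actual marker format used by the LLM Gateway isn't documented here, so the `<<HALLUCINATION …>>` markers, the placeholder wording, and the `mask_hallucinations` helper below are all assumptions.

```python
import re

# Hypothetical marker format: we assume the LLM Gateway wraps a suspect
# span in <<HALLUCINATION type=...>>...<<END>> tags. The real format may differ.
MARKER = re.compile(r"<<HALLUCINATION type=(url|phone|email)>>(.*?)<<END>>")

# Placeholder text shown to the agent, keyed by entity type (illustrative).
PLACEHOLDERS = {
    "url": "[INSERT CORRECT URL]",
    "phone": "[INSERT CORRECT PHONE NUMBER]",
    "email": "[INSERT CORRECT EMAIL ADDRESS]",
}

def mask_hallucinations(answer: str) -> str:
    """Replace each marked hallucination with a highlighted placeholder."""
    return MARKER.sub(lambda m: PLACEHOLDERS[m.group(1)], answer)
```

For example, a marked answer like `"Visit <<HALLUCINATION type=url>>https://fake.example<<END>> for help."` would come out as `"Visit [INSERT CORRECT URL] for help."`, leaving the agent a clearly flagged spot to fill in.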

A hallucination that's been masked, with the placeholder highlighted for the agent

Check out our animated example below: A hallucinated URL has been detected and masked. The agent sees the placeholder, enters the right URL, and sends the fixed response to the consumer.


To make quick work of filling in placeholders, make your contact info available as predefined content. This exposes the content on the Replies tab in the On-Demand Recommendations widget. The agent can copy the info with a single click and paste it where needed. We’ve illustrated this in our animation above.

Best practices

Train your agents

Train your agents on the difference between regular answer recommendations and enriched answer recommendations, the need to review the latter with care, and the reasons why. Similar important guidance is offered in the UI:

The tooltip for an enriched answer that guides the agent to review for accuracy and appropriateness before sending the answer

Your agents are able to edit enriched answers before sending them to consumers.

The tooltip for an enriched answer offered via the On-Demand Recommendations widget, where the tooltip indicates the agent can edit the answer before sending

More best practices

See the general KnowledgeAI best practices on using our enrichment service.

Reporting

Use the Generative AI Dashboard in Conversational Cloud's Report Center to make data-driven decisions that improve the effectiveness of your Generative AI solution.

A view of the Generative AI Reporting dashboard

The dashboard helps you answer these important questions:

  • How is Generative AI performing in my solution?
  • How much is Generative AI helping my agents and bots?

The dashboard draws conversational data from all channels across Voice and Messaging, producing actionable insights that can drive business growth and improve consumer engagement.

Access Report Center by clicking Optimize > Manage on the left-hand navigation bar.

Limitations

Answer recommendations that are enriched via Generative AI can be plain answers…or they can be rich answers that contain an image and links (see an example).

Rich answers that are also enriched via Generative AI are supported. However, be aware that the image and links come from the highest-scoring article, so they might not perfectly align with the answer text generated by the LLM.

Rich content only appears in an enriched answer when the generated answer is based on an article that has specified content links (image URL, content link, video link, etc.).
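That rule can be sketched as a simple conditional. The field names below are illustrative stand-ins, not the real KnowledgeAI schema:

```python
def attach_rich_content(enriched_text: str, top_article: dict) -> dict:
    """Build the recommendation payload; include rich content only if the
    highest-scoring article specifies content links (illustrative fields)."""
    answer = {"text": enriched_text}
    # Collect whichever content links the article actually specifies.
    links = {k: top_article.get(k) for k in ("imageUrl", "contentUrl", "videoUrl")}
    links = {k: v for k, v in links.items() if v}
    if links:
        # Rich content is taken from the top article, not generated by the LLM.
        answer["richContent"] = links
    return answer
```

An article with no content links yields a plain answer; one with, say, an `imageUrl` yields a rich answer carrying that image.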

FAQs

I'm offering bot recommendations to my agents, and I want those bots to send answers that are enriched via Generative AI. How do I turn this on?

This configuration is done at the interaction level in the bot, so you do this in LivePerson Conversation Builder, not in Conversation Assist. Learn about automating enriched answers.

When enriched answers via Generative AI are turned on, is there any change to how answer recommendations are made?

Yes.

Review the logic on how answer recommendations are made when the answers aren't enriched via Generative AI.

When the answers are enriched, things change a little.

If the Enriched answers via Generative AI setting is turned on for a knowledge base within Conversation Assist, then when that knowledge base is queried for article matches, several of the highest-scoring articles are retrieved, not just the top one. Keep in mind that only articles that meet the Answer Confidence threshold are retrieved. All of the retrieved articles are then sent to the LLM service for a single, enriched answer recommendation.
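The retrieval step above can be sketched as follows. The `(confidence, article)` pair structure, `TOP_N`, and `THRESHOLD` are illustrative assumptions standing in for the product's Answer Confidence settings:

```python
# Illustrative sketch of the multi-article retrieval described above.
# `articles` is a list of (confidence, article_text) pairs already scored
# against the consumer's query.
TOP_N = 3        # "several of the highest-scoring articles" (assumed value)
THRESHOLD = 0.6  # stand-in for the Answer Confidence threshold

def select_articles_for_enrichment(articles):
    """Keep only articles that meet the confidence threshold, then take
    the several highest-scoring ones to send to the LLM service together."""
    eligible = [a for a in articles if a[0] >= THRESHOLD]
    eligible.sort(key=lambda a: a[0], reverse=True)
    return eligible[:TOP_N]
```

Note that a low-scoring article is never sent to the LLM, even if fewer than `TOP_N` articles clear the threshold.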

We retrieve multiple articles when you're using Generative AI because giving the LLM service more knowledge coverage typically produces a better enriched answer than generating the response from a single article alone.

For a consistent agent experience, in the Agent Workspace you can always find enriched answers offered first, followed by answers that aren’t enriched, and bots listed last. Within each grouping, the recommendations are then ordered by their confidence scores.

The confidence score shown for the enriched answer recommendation is the confidence score of the highest-scoring article that was retrieved from the knowledge base.
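The ordering rule above can be expressed as a simple sort key. The recommendation dicts and their `kind`/`confidence` fields are illustrative, not an actual API shape:

```python
# Group rank: enriched answers first, plain answers second, bots last.
GROUP_RANK = {"enriched": 0, "answer": 1, "bot": 2}

def order_recommendations(recs):
    """Sort recommendations for display: by group, then by confidence
    score (highest first) within each group."""
    return sorted(recs, key=lambda r: (GROUP_RANK[r["kind"]], -r["confidence"]))
```

With this key, a bot recommendation with a 0.9 confidence still appears below an enriched answer with a 0.7 confidence, because grouping takes precedence over score.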

Does enrichment of the answer via Generative AI change its confidence score?

No. The answer, i.e., the article, is matched to the consumer’s query and given a confidence score for that match before the answer is enriched by the LLM service. (Learn about KnowledgeAI’s search flow.)

Enrichment of the answer via Generative AI doesn’t affect the assigned confidence score for the match. Similarly, hallucinations detected in the enriched answer don’t affect the score either.

More FAQs

See the general KnowledgeAI FAQs on our enrichment service.

Troubleshooting

My agents aren’t being offered answer recommendations as I expect. What can I do?

Within Conversation Assist, on the Settings page, try adjusting the Answer confidence setting. But also see the KnowledgeAI discussion on confidence thresholds.