Not ready to make use of Generative AI and LLMs? No problem. You don't need to incorporate these technologies into your knowledge base solution. The choice is yours: Use KnowledgeAI™ with or without Generative AI.

Leveraging the power of Generative AI

At LivePerson, we’re thrilled that advancements in Natural Language Processing (NLP) and Large Language Models (LLMs) have opened up a world of possibilities for Conversational AI solutions. The impact is truly transformative.

That's why we're delighted to say that our KnowledgeAI application offers an answer enrichment service powered by one of OpenAI's best and latest LLMs. If you’re using KnowledgeAI to recommend answers to agents via Conversation Assist, or to automate intelligent answers from LivePerson Conversation Builder bots to consumers, you can take advantage of this enrichment service.

How does it work? At a high level, the consumer’s query is passed to KnowledgeAI, which uses its advanced search methods to retrieve the most relevant answers from your knowledge base. Those answers, along with some conversation context, are then passed to the LLM service for enrichment, to craft a final answer. That’s Generative AI at work. The end result is an answer that’s accurate, contextually relevant, and natural. In short, enriched answers.

To see what we mean, let’s check out a few examples in an automated conversation with a bot.

Here’s a regular answer that’s helpful…but stiff:

An example of a helpful but stiff answer from a bot

But this enriched answer is warm:

An example of a helpful and warm answer from a bot

This regular answer is helpful:

An example of a helpful answer from a bot

But this enriched answer is even more helpful:

An example of a robust answer from a bot

This regular answer doesn’t handle multiple queries within the same question:

An example of a consumer asking a bot two questions but the bot only answering one

But this enriched answer does so elegantly:

An example of a consumer asking a bot two questions and the bot successfully answering both

Overall, the results are smarter, warmer, and better. And the experience, well, human-like.

Use KnowledgeAI’s answer enrichment service to take advantage of the capabilities of Generative AI safely and productively within our trusted Conversational AI platform, and reap better business outcomes.

AI safety tools

Our approach to safe Generative AI spans several areas:

  • Our core architecture
  • Test and learn
  • Prompt selection: Our "factual" prompt templates, which you can copy, are designed and tested for safety on hundreds of bots.
  • Agents in the loop

Language support

Enriched answers are supported for:

  • Consumer queries in English where the knowledge base’s language (content) is English
  • Consumer queries in Spanish where the knowledge base’s language (content) is Spanish

If the language of your knowledge base is one of the 50+ other languages available, support is experimental. Feel free to try these languages in your demo solutions to explore the capabilities of Generative AI. Learn alongside us, and share your feedback! As always, proceed with care: Test thoroughly before rolling out to Production.

Learn about cross-lingual queries and mixed-language knowledge bases. Support for these requires Generative AI.

Get started

  1. Activate this Generative AI feature.
  2. Do one or both of the following:
    • Use enriched answers in Conversation Assist.
    • Use enriched answers in your Conversation Builder bots.

Don't have a knowledge base to use yet? Learn about the different ways to populate a knowledge base with content.

Answer enrichment flow

Regardless of whether you’re using enriched answers in Conversation Assist or in a Conversation Builder bot, the same general flow is used (a code sketch follows the steps below):

An architectural diagram of how the service works

  1. The consumer’s query is passed to KnowledgeAI, which uses its advanced search methods to retrieve matched articles from the knowledge base.
  2. Three items are then passed to the enrichment service:
    • The matched articles
    • Previous turns from the current conversation that provide context
    • A prompt style
  3. A prompt is dynamically generated and then used by the underlying LLM service to generate a single, final enriched answer. And the answer is returned to KnowledgeAI.
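
To make this concrete, here’s a minimal sketch of the flow in Python. Every name in it (Article, search_knowledge_base, build_prompt, call_llm) is a hypothetical stand-in; the real KnowledgeAI and LLM services are internal to Conversational Cloud, not a public SDK.

    from dataclasses import dataclass

    # Hypothetical stand-ins for internal services; a sketch of the flow, not actual product code.

    @dataclass
    class Article:
        title: str
        content: str
        confidence: float  # match score assigned by KnowledgeAI's search

    def search_knowledge_base(query: str) -> list[Article]:
        # Step 1: KnowledgeAI's advanced search retrieves matched articles.
        return [Article("Returns", "Items can be returned within 30 days.", 0.92)]

    def build_prompt(prompt_style: str, articles: list[Article], turns: list[str], query: str) -> str:
        # Step 2: matched articles, recent conversation turns, and a prompt style
        # are combined; step 3 begins with this dynamically generated prompt.
        article_text = "\n".join(a.content for a in articles)
        history = "\n".join(turns)
        return (f"Style: {prompt_style}\nConversation so far:\n{history}\n"
                f"Articles:\n{article_text}\nConsumer: {query}\n"
                "Answer using only the articles above.")

    def call_llm(prompt: str) -> str:
        # Step 3: the underlying LLM service generates the single, final answer.
        return "You can return items within 30 days of purchase. Anything else I can help with?"

    def get_enriched_answer(query: str, turns: list[str]) -> str:
        articles = search_knowledge_base(query)                      # step 1
        prompt = build_prompt("Enrichment", articles, turns, query)  # steps 2-3
        return call_llm(prompt)                                      # returned to KnowledgeAI

    print(get_enriched_answer("What's your returns policy?", ["Consumer: Hi!", "Bot: Hello!"]))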

In the case of Conversation Assist, if you have multiple knowledge bases assigned to the same skill (within your Conversation Assist configuration), each knowledge base provides its own enriched answer.

Enrichment prompts

When you're using Generative AI to enrich answers in Conversation Assist or Conversation Builder bots, the Enrichment prompt is required.

An example Enrichment prompt, which is required

The Enrichment prompt is the prompt to send to the LLM service when the consumer’s query is matched to articles in the knowledge base. It instructs the LLM service on how to use the matched articles to generate an enriched answer.
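
For illustration only, an Enrichment-style prompt might resemble the hypothetical template below; the actual templates in the Prompt Library are designed and tested by LivePerson, and the placeholders here are invented. Note the final line, which also shows the kind of word-count instruction discussed under "Response length" below.

    # Hypothetical template; the {placeholders} stand for values filled in at runtime.
    ENRICHMENT_PROMPT = """You are a friendly assistant for {brand_name}.
    Answer the consumer's question using ONLY the knowledge articles below.
    If the articles don't contain the answer, say that you don't know.

    Knowledge articles:
    {matched_articles}

    Conversation so far:
    {conversation_context}

    Consumer: {consumer_query}
    Respond warmly, using at least 10 words and no more than 300 words."""

    print(ENRICHMENT_PROMPT.format(
        brand_name="Acme Bank",
        matched_articles="To reset your password, tap Settings > Security.",
        conversation_context="Consumer: Hi there!",
        consumer_query="How do I reset my password?",
    ))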

Learn about creating and managing prompts in the Prompt Library.

No Article Match prompts

When you're using Generative AI to enrich answers in Conversation Assist or Conversation Builder bots, the No Article Match prompt is optional.

An example No Article Match prompt, which is optional

The No Article Match prompt is the prompt to send to the LLM service when the consumer’s query isn’t matched to any articles in the knowledge base. It instructs the LLM service on how to generate a response (using just the conversation context and the prompt).

If you don’t select a No Article Match prompt and no matching article is found, no call is made to the LLM service for a response.
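
A minimal sketch of that decision logic, with all names hypothetical:

    from typing import Optional

    def call_llm(prompt: str) -> str:
        return "generated response"  # stand-in for the real LLM service

    def generate_response(matched_articles: list,
                          enrichment_prompt: str,
                          no_article_match_prompt: Optional[str] = None) -> Optional[str]:
        if matched_articles:
            # Articles matched: enrich them using the (required) Enrichment prompt.
            return call_llm(enrichment_prompt)
        if no_article_match_prompt is not None:
            # No matches, but a No Article Match prompt is selected: generate a
            # response from the conversation context and the prompt alone.
            return call_llm(no_article_match_prompt)
        # No matches and no No Article Match prompt: the LLM service isn't called.
        return None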

Using a No Article Match prompt can offer a more fluent and flexible response that helps the user refine their query:

  • Consumer query: What’s the weather like?
  • Response: Hi there! I'm sorry, I'm not able to answer that question. I'm an AI assistant for this brand, so I'm here to help you with any questions you may have about our products and services. Is there something specific I can help you with today?

Using a No Article Match prompt also means that small talk is supported:

An example of a bot answering a question about how it's going

A No Article Match prompt can also yield answers that are out-of-bounds. The model might hallucinate and provide a non-factual response in its effort to generate an answer using only the memory of the data it was trained on. Use this prompt with caution, and test thoroughly.

Learn about creating and managing prompts in the Prompt Library.

Response length

The prompt that's provided to the LLM service can direct it to respond in a certain number of words. For example, a prompt that's used in a messaging bot might direct the service to respond using at least 10 words and no more than 300 words.

Be aware that the length of the matched article(s) influences the length of the answer (within the bounds stated). Generally speaking, the longer the relevant matched article, the longer the response.

Learn about creating and managing prompts in the Prompt Library.

Confidence thresholds

KnowledgeAI integrations within Conversation Builder bots (Knowledge AI interaction, KnowledgeAI integration) and the settings within Conversation Assist both allow you to specify a “threshold” that matched articles must meet to be returned as results. We recommend a threshold of “GOOD” or better for best performance.

If you’re using enriched answers, use caution when downgrading the threshold to FAIR PLUS. If a low-scoring article is returned as a match, the LLM service can sometimes try to use it in the response. And the result is a low-quality answer.

As an example, below is a scenario where a strange consumer query was posed to a financial brand’s bot. The query yielded a FAIR PLUS match to an article on troubleshooting issues when downloading the brand’s banking app. So the enriched answer was as follows:

  • Consumer query: Can I book a flight to Hawaii?
  • Enriched answer: I'm sorry, I can't find any information about booking a flight to Hawaii. However, our Knowledge Articles do provide information about our banking app. If you're having trouble downloading our app, check that…

In the above example, the service rightly recognized it couldn’t speak to the consumer’s query. However, it also wrongly included irrelevant info in the response because that info was in the matched article.
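
For intuition, the threshold filtering might look like the sketch below; the labels map to internal score bands, so the numeric cutoffs here are invented:

    # Invented cutoffs for illustration; KnowledgeAI's actual score bands are internal.
    THRESHOLD_CUTOFFS = {"GOOD": 0.75, "FAIR_PLUS": 0.55}

    def articles_to_enrich(matches: list, threshold: str = "GOOD") -> list:
        # matches: (article_text, confidence_score) pairs from the search step.
        # With FAIR_PLUS, weaker matches (like the banking-app article above)
        # slip through and can get woven into the enriched answer.
        cutoff = THRESHOLD_CUTOFFS[threshold]
        return [article for article, score in matches if score >= cutoff]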

Hallucinations

Hallucinations in LLM-generated responses happen from time to time, so a Generative AI solution that’s trustworthy requires smart and efficient ways to handle them. The degree of risk here depends on the style of the prompt that’s used.

Conversational Cloud's LLM Gateway has a Hallucination Detection post-processing service that detects and handles hallucinations with respect to URLs, phone numbers, and email addresses.
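
As a rough illustration of the idea (this is not the gateway's actual implementation), a post-processor can compare such entities in the generated answer against the matched articles:

    import re

    # Toy check in the spirit of the Hallucination Detection service: flag URLs,
    # emails, and phone numbers in the answer that aren't grounded in the articles.
    PATTERNS = {
        "url": re.compile(r"https?://\S+"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def find_suspect_entities(answer: str, source_text: str) -> list:
        flagged = []
        for kind, pattern in PATTERNS.items():
            for match in pattern.findall(answer):
                if match not in source_text:  # entity doesn't appear in any matched article
                    flagged.append((kind, match))
        return flagged

    print(find_suspect_entities(
        "Call us at 1-800-555-0199 or visit https://example.com/help",
        "Visit https://example.com/help for support.",
    ))  # -> [('phone', '1-800-555-0199')]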

Best practices

Learn about best practices.

Tuning outcomes

When it comes to tuning outcomes, you have a few levers: the prompts you select, the confidence threshold you require for article matches, and the response length that your prompts specify.

Reporting

Use the Generative AI Dashboard in Conversational Cloud's Report Center to make data-driven decisions that improve the effectiveness of your Generative AI solution.

A view of the Generative AI Reporting dashboard

The dashboard helps you answer these important questions: 

  • How is Generative AI performing in my solution?
  • How much is Generative AI helping my agents and bots?

The dashboard draws conversational data from all channels across Voice and Messaging, producing actionable insights that can drive business growth and improve consumer engagement.

Access Report Center by clicking Optimize > Manage on the left-hand navigation bar.

Security considerations

When you turn on enriched answers, your data remains safe and secure, and we use it in accordance with the guidelines in the legal agreement that you’ve accepted and signed. Note that:

  • No data is stored by the third-party vendor.
  • All data is encrypted to and from the third-party LLM service.
  • Payment Card Industry (PCI) info is always masked before being sent; see the masking sketch after the diagram below.
  • PII (Personally Identifiable Information) can also be masked upon your request. Be aware that doing so can cause some increased latency. It can also inhibit an optimal consumer experience because the omitted context might result in less relevant, unpredictable, or junk responses from the LLM service. To learn more about turning on PII masking, contact your LivePerson representative.

A security diagram illustrating how brand data is protected
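
As a toy illustration of the masking idea only; the platform's actual PCI/PII masking is a managed, server-side capability, not something you implement yourself:

    import re

    # Match 13-16 digit card numbers, allowing spaces or hyphens between groups.
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def mask_pci(text: str) -> str:
        return CARD_PATTERN.sub("[MASKED CARD]", text)

    print(mask_pci("My card number is 4111 1111 1111 1111, please update it."))
    # -> "My card number is [MASKED CARD], please update it."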

Limitations

Currently, there are no strong guardrails in place for malicious or abusive use of the system. For example, a leading question like, “Tell me about your 20% rebate for veterans,” might produce a hallucination: The response might incorrectly describe such a rebate when, in fact, there isn’t one.

Malicious or abusive behavior, and hallucinations as outcomes, can introduce liability for your brand. For this reason, it's important to train your agents to carefully review enriched answers. Also, as you test enriched answers, please send us your feedback about potential vulnerabilities. We will use that feedback to refine our models, tuning them for that delicate balance between useful generated responses and necessary protections.

FAQs

Which LLM model are you using?

LivePerson is using one of the best and latest versions of OpenAI’s models. Advances in this area are happening quickly, so we’re continually evaluating the model we’re using to ensure it’s the best choice possible.

Currently, it’s not possible for you to select a particular model to use.

Why are enriched answers often better than regular (unenriched) answers?

A regular answer is based on a single matched article: specifically, the one with the highest confidence score.

But an enriched answer is different. It’s a response that’s generated by the enrichment service using all matched articles (based on the threshold in the integration) and some conversation context. All of this info is used by the service to generate a warm and natural-sounding answer using Generative AI. As a result, it’s often a superior answer.
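
As a rough sketch of the difference, with all names hypothetical:

    def call_llm(prompt: str) -> str:
        return "Thanks for asking! You can return items within 30 days."  # stand-in

    def regular_answer(matches: list) -> str:
        # matches: (article_text, confidence_score) pairs.
        # A regular answer returns the single best-scoring article as-is.
        best_article, _ = max(matches, key=lambda m: m[1])
        return best_article

    def enriched_answer(matches: list, context: list) -> str:
        # An enriched answer sends ALL matched articles plus conversation
        # context to the LLM, which generates one warm, natural response.
        articles = "\n".join(article for article, _ in matches)
        prompt = (f"Conversation: {' '.join(context)}\n"
                  f"Articles:\n{articles}\n"
                  "Answer warmly using only the articles above.")
        return call_llm(prompt)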

Do hallucinations affect the confidence scores of article matches?

No. The answer, i.e., the article, is matched to the consumer’s query and given a confidence score for that match before the answer is enriched by the LLM service. (Learn about KnowledgeAI’s search flow.)

Enrichment of the answer via Generative AI doesn’t affect the assigned confidence score for the match. Similarly, hallucinations detected in the enriched answer don’t affect the score either.

In KnowledgeAI, there are two general types of knowledge bases: internal and external. Do both types support enriched answers?

Yes, they do, and the answer enrichment works the same regardless of the type of knowledge base.

Is this LLM feature and architecture GDPR-compliant?

Yes, it's compliant with the General Data Protection Regulation (GDPR) for the European Union. Learn more.