For a flexible architecture and an optimal consumer experience, follow these best practices.
Raising the quality of answers
There are several best practices you can follow to raise the quality of answers.
If you created the articles by importing content, always check for import errors and substantively review the articles that were created.
Both the KnowledgeAI integrations within Conversation Builder bots (the KnowledgeAI interaction and the KnowledgeAI integration) and the settings within Conversation Assist allow you to specify a “threshold” that matched articles must meet to be returned as results. For best performance, we recommend a threshold of “GOOD” or better.
The actual KnowledgeAI search for relevant answers (matched articles) in your knowledge base is an important part of any KnowledgeAI integration.
Before you get too far with your use case (Conversation Assist, a Conversation Builder bot, etc.), use the Answer Tester tool to test article matching. This helps ensure you get the performance you expect. For example, you can use the Answer Tester to check article matching when enriched answers are enabled.
Common best practices
- Design a modular approach, where each knowledge base supports a particular classification in your business: Create one knowledge base per category, split your intents into domains along the same categories, and add multiple knowledge base integrations for use in bots. A modular approach like this makes it easier to use a knowledge base for a specific purpose in a bot, and it yields a faster response during the conversation.
- Provide broad coverage within the knowledge base. The more diverse the content is, the more likely it is that the consumer’s query will be matched to an article.
- Keep the articles themselves short and focused.
- Evaluate whether long articles can be broken into smaller ones.
- If, during testing, you find there’s an article that isn’t returned by AI Search as the top answer, associate an intent with the article.
- Specify tags for articles: Tags are keywords, not sentences, that highlight the key noun(s) or word(s) in the article’s title, training phrases, or content. An article about health insurance might have the following tags: health, insurance, benefits. To increase the accuracy of search results, add tags.
- Take advantage of categories: Assign a category to each article so that you can find articles by category. This is especially helpful when the knowledge base has many articles.
Titles and training phrases
- Use full sentences, e.g., “How do I reset my password?”
- Follow the Intent Manager best practices for creating training phrases.
Summary and details
Keep these as brief as possible. The Summary section should be no longer than 120 words, and the Detail section should also be short.
Very long pieces of text will be split into multiple messages (after 1000 characters) when sent to the consumer, and in rare cases the messages can be sent in the wrong order.
If you need to use a long piece of text, you can use the breakWithDelay tag to force the break at a specific point. Alternatively, you can override the behavior to break the text using the setAllowMaxTextResponse scripting function.
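To make the splitting behavior concrete, here is a minimal, illustrative sketch in plain JavaScript of how a long answer can be broken into messages of at most 1,000 characters, preferring sentence and word boundaries. This is not platform code; the `splitMessage` function and its boundary heuristic are assumptions for illustration only.

```javascript
// Illustrative sketch only — not LivePerson platform code. It approximates
// how a long answer might be broken into multiple messages: each chunk is
// capped at a character limit, breaking at the last sentence or word
// boundary before the limit where possible.
function splitMessage(text, limit = 1000) {
  const chunks = [];
  let rest = text.trim();
  while (rest.length > limit) {
    const slice = rest.slice(0, limit);
    // Prefer to break after sentence-ending punctuation, then whitespace.
    let cut = Math.max(
      slice.lastIndexOf('. '),
      slice.lastIndexOf('? '),
      slice.lastIndexOf('! ')
    );
    if (cut !== -1) cut += 1;           // keep the punctuation with the chunk
    else cut = slice.lastIndexOf(' ');  // fall back to a word boundary
    if (cut <= 0) cut = limit;          // no boundary found: hard break
    chunks.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut).trim();
  }
  if (rest) chunks.push(rest);
  return chunks;
}
```

Keeping the Summary and Detail sections short avoids this splitting entirely, which is the simplest way to prevent out-of-order messages.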
Best practices for enriched answers via Generative AI
Using enriched answers created via Generative AI? Concerned about hallucinations? Consider turning on enriched answers in bots that aren’t consumer-facing first (e.g., in a support bot for your internal field team), or in Conversation Assist first. An internal bot is still an automated experience, but it’s safer because the conversations are with your internal employees. Conversation Assist is a bit more consumer-facing, but it has an intermediary safety measure, namely, your agents. They can review the quality of the enriched answers and edit them if necessary, before sending them to consumers. Once you’re satisfied with the results in these areas, you can add support in consumer-facing bots.
More Generative AI best practices
See the relevant section for your Generative AI use case:
- Best practices for usage in Conversation Assist
- Best practices for usage in Conversation Builder bots
Best practices for internal knowledge bases
To promote best practices, limits are enforced for the number of articles, the length of fields, and so on.
Number of articles
- A good guideline is 75-100 articles in a knowledge base. Keep in mind that every article requires some level of training if you’re going to use NLU.
- If you have a knowledge base that exceeds 75-100 articles, consider splitting it into smaller knowledge bases by category, splitting your intents into domains along the same categories, and adding multiple knowledge base integrations. Then have the NLU match the consumer’s question to the category-based intent and search the applicable knowledge base. This yields a faster response during the conversation.
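The category-based routing described above can be sketched as a simple lookup: the matched intent selects which knowledge base to search. This is an illustrative sketch only, not platform code; the intent names and knowledge base names are assumptions for illustration.

```javascript
// Illustrative sketch only — not LivePerson platform code. A matched,
// category-based intent selects the knowledge base to search, so each
// search runs against a smaller, more focused set of articles.
const kbByIntent = {
  billing_question: 'Billing KB',   // hypothetical intent -> KB mapping
  shipping_question: 'Shipping KB',
  returns_question: 'Returns KB',
};

function routeToKnowledgeBase(intent, fallbackKb = 'General KB') {
  // Fall back to a general knowledge base when no category matches.
  return kbByIntent[intent] || fallbackKb;
}
```

Because each knowledge base stays small, both NLU training and run-time search remain fast.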
If your internal knowledge base uses Knowledge Base intents, which is a legacy feature, behind the scenes the LivePerson (Legacy) engine is used for intent matching. For better performance and a more scalable solution, LivePerson recommends that you convert from Knowledge Base intents to Domain intents as soon as possible. This allows you to associate a domain that uses the LivePerson engine (or a third-party engine). There are many benefits of LivePerson over LivePerson (Legacy).
That said, you can also use our powerful AI Search instead of Natural Language Understanding. It's ready out of the box: no setup and no intents required. Learn about search methods.
Best practices for external knowledge bases
- Before you begin, ensure the content in the CMS is appropriate for conversational AI.
- When you add an external knowledge base, configure it to use LivePerson AI whenever possible (learn why). Also, when you create an external knowledge base that uses LivePerson AI, consider configuring it so that fetched content is cached at run time, for improved performance.
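To illustrate why run-time caching improves performance, here is a minimal sketch of a time-to-live (TTL) cache in plain JavaScript. This is not platform code; the `makeCache` helper, the `fetchFn` callback, and the TTL value are assumptions for illustration only.

```javascript
// Illustrative sketch only — not LivePerson platform code. A tiny in-memory
// cache with a time-to-live: repeat lookups within the TTL skip the slow
// round trip to the external CMS, which is the benefit of caching fetched
// content at run time.
function makeCache(fetchFn, ttlMs) {
  const store = new Map(); // key -> { value, expires }
  return function get(key) {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) {
      return hit.value; // cache hit: no fetch needed
    }
    const value = fetchFn(key); // cache miss: fetch from the source
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

The trade-off is freshness: cached content can lag behind edits in the CMS until the TTL expires, so choose a TTL that fits how often your content changes.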