Introduction

After you've added your content, tune the knowledge base for optimal performance:

  1. Perform a search using an authentic consumer query.
  2. Review the results: Check the titles and contents of the matched articles, as well as their scores, to make sure everything looks reasonable.
  3. Take steps to improve the performance:

    • Article content: Refine the article content to improve article matching via AI Search. If there isn’t a relevant article to answer a given consumer query, add one. It’s unlikely that an existing, relevant article won’t be returned, but if that happens, improve the article’s title and/or add tags to the article.
    • Associated intents: If you've tied the articles to intents, refine the intents to improve the article matching via intent matching. You do this by adjusting the training phrases for the intent.
    • Configuration: Adjust the answer threshold or the number of results (see the sketch at the end of this list).
    • Prompt: If you're using Generative AI, refine the prompt that's provided to the LLM service in order to improve the response.

    Learn about search methods (AI Search, Intent Match).

    Learn about KnowledgeAI best practices.

    Learn about prompt writing best practices.
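
To make this tune-and-test loop concrete, here is a minimal Python sketch of it. Everything in it is illustrative: search_knowledge_base is a hypothetical stand-in for whatever search mechanism you use (for example, the Answer Tester described below), not a real KnowledgeAI call.

    # Minimal sketch of the tune-and-test loop, under the assumptions above.
    def search_knowledge_base(query: str, threshold: float, max_results: int) -> list:
        """Hypothetical search returning articles as {'title': ..., 'score': ...} dicts."""
        raise NotImplementedError("replace with your actual search mechanism")

    def review_results(query: str, threshold: float = 0.6, max_results: int = 3) -> None:
        """Run one authentic consumer query and review the matches and their scores."""
        results = search_knowledge_base(query, threshold, max_results)
        if not results:
            # No article met the threshold: add an article, improve titles/tags,
            # refine intents, or adjust the threshold / number of results.
            print(f"No articles met the {threshold} threshold")
            return
        for article in results:
            # Check that each title, content, and score looks reasonable.
            print(f"{article['score']:.2f}  {article['title']}")

    # Example (raises until search_knowledge_base is implemented):
    # review_results("How do I reset my password?")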

Test a single knowledge base (Answer Tester)

Use this process to test the performance of a consumer utterance in a single knowledge base.

  1. Access KnowledgeAI.
  2. Open the knowledge base, and click Articles in the menu in the upper-left corner.
  3. In the Answer Tester on the right, specify the following:

    The default state of the Answer Tester

    • Question: Enter an authentic consumer query for which you want to find matching articles.
    • Retrieve answers by: Select the type of search to perform.
    • Threshold: Select the confidence threshold that an article must meet for it to be returned as a result.
    • # of results: Select how many results to return.
    • Article status: Select the status of the article: Active, Inactive, or All. This option is only available for internal knowledge bases.
    • Enriched answers via Generative AI: Turn this on to test the call to the LLM service for an enriched answer. The top matched articles are sent to the LLM to generate a single enriched answer, which is returned in the Summary field of the best matched article. Alternatively, turn this off to omit the call to the LLM service and test only the article matching. Learn about answers enriched via Generative AI.
    • Enrichment prompt: Required when you turn on Enriched answers via Generative AI. When the consumer’s query is matched to articles in the knowledge base, this is the prompt to send to the LLM service. The prompt instructs the LLM service on how to use the matched articles to generate an enriched answer.
    • No Article Match prompt: Optional when you turn on Enriched answers via Generative AI. When the consumer’s query isn’t matched to any articles in the knowledge base, this is the prompt to send to the LLM service. The prompt instructs the LLM service on how to generate a response. If you don’t select a No Article Match prompt and no matching article is found, no call is made to the LLM service at all. The sketch after these steps illustrates this flow.

      Using this prompt can produce a more fluent, flexible response that helps the consumer refine their query, and it also supports small talk. However, it can yield out-of-bounds answers: because the model generates the response using only the data it was trained on, it might hallucinate and return non-factual information.

  4. Click Get Answers.
  5. Review the results under Matched answers.

    The results of an example test using the Answer Tester

  6. Optionally, click an article title to view the article, and toggle between the article view and its JSON.

    Viewing the article info and JSON
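
The settings above map naturally onto a single request-shaped structure, and the Generative AI options follow a simple decision flow. The Python sketch below captures that flow as documented; the field names and the call_llm helper are hypothetical illustrations, not the actual KnowledgeAI schema or API.

    # Hypothetical Answer Tester settings; all field names are illustrative only.
    settings = {
        "question": "How do I reset my password?",   # authentic consumer query
        "retrieve_answers_by": "AI Search",          # the type of search to perform
        "threshold": 0.6,                            # min confidence for a match
        "num_results": 3,
        "article_status": "Active",                  # internal knowledge bases only
        "enriched_answers": True,                    # call the LLM service?
        "enrichment_prompt": "my enrichment prompt", # required when enrichment is on
        "no_article_match_prompt": None,             # optional fallback prompt
    }

    def get_answers(settings, matched_articles):
        """Mirror the documented flow for enriched answers."""
        if not settings["enriched_answers"]:
            return None  # test article matching only; no LLM call
        if matched_articles:
            # Top matches go to the LLM; the enriched answer is returned
            # in the Summary field of the best matched article.
            return call_llm(settings["enrichment_prompt"], matched_articles)
        if settings["no_article_match_prompt"]:
            return call_llm(settings["no_article_match_prompt"], [])
        return None  # no fallback prompt selected, so no LLM call is made

    def call_llm(prompt, articles):
        raise NotImplementedError("hypothetical LLM service call")

The key point is the final branch: without a No Article Match prompt, a miss produces no LLM call at all.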

Test multiple knowledge bases (Test & Tune)

Use this process to test the performance of a consumer utterance across several knowledge bases.

  1. Access KnowledgeAI.
  2. Click Test & Tune in the upper-left corner.
  3. Click the Test tab.
  4. Specify the following:

    The default state of the Test & Tune testing tool

    • Question: Enter an authentic consumer query for which you want to find matching articles.
    • Retrieve answers by: Select the type of search to perform.
    • Threshold: Select the confidence threshold that an article must meet for it to be returned as a result.
    • # of results: Select how many results to return.
    • Article status: Select the status of the article: Active, Inactive, or All. This option is only available for internal knowledge bases.
    • Language: Use this to filter the list of knowledge bases that are available for selection. For example, if you want to select one or more Spanish-language knowledge bases in the Knowledge base field, select "Spanish" here.
    • Knowledge base: Select the knowledge bases to search. You can select up to five (see the sketch after these steps).
    • Enriched answers via Generative AI: Turn this on to test the call to the LLM service for an enriched answer. The top matched articles are sent to the LLM to generate a single enriched answer, which is returned in the Summary field of the best matched article. Alternatively, turn this off to omit the call to the LLM service and test only the article matching. Learn about answers enriched via Generative AI.
    • Enrichment prompt: Required when you turn on Enriched answers via Generative AI. When the consumer’s query is matched to articles in the selected knowledge bases, this is the prompt to send to the LLM service. The prompt instructs the LLM service on how to use the matched articles to generate an enriched answer.
    • No Article Match prompt: Optional when you turn on Enriched answers via Generative AI. When the consumer’s query isn’t matched to any articles in any of your selected knowledge bases, this is the prompt to send to the LLM service. The prompt instructs the LLM service on how to generate a response. If you don’t select a No Article Match prompt and no matching article is found, no call is made to the LLM service at all.

      Using this prompt can produce a more fluent, flexible response that helps the consumer refine their query, and it also supports small talk. However, it can yield out-of-bounds answers: because the model generates the response using only the data it was trained on, it might hallucinate and return non-factual information.

  5. Click Get Answers.
  6. Review the results:

    The results of an example test using the Test & Tune testing tool
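
Conceptually, testing across knowledge bases amounts to fanning one query out to each selected knowledge base and reviewing the pooled, scored matches. The Python sketch below illustrates that idea only; search_one_kb is a hypothetical helper, and the assumption that Test & Tune pools and ranks results by score this way is mine, not a statement about the product's internals.

    # Conceptual sketch of a multi-knowledge-base test; not the real API.
    MAX_KBS = 5  # Test & Tune lets you select up to five knowledge bases

    def search_one_kb(kb_name: str, question: str, threshold: float) -> list:
        """Hypothetical per-knowledge-base search returning scored articles."""
        raise NotImplementedError("replace with your actual search mechanism")

    def test_across_kbs(question, kb_names, threshold=0.6, num_results=3):
        if len(kb_names) > MAX_KBS:
            raise ValueError(f"select at most {MAX_KBS} knowledge bases")
        pooled = []
        for kb in kb_names:
            for article in search_one_kb(kb, question, threshold):
                # Track which knowledge base each match came from.
                pooled.append({**article, "knowledge_base": kb})
        # Assumption: rank pooled matches by confidence score, highest first.
        pooled.sort(key=lambda a: a["score"], reverse=True)
        return pooled[:num_results]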