Is there a limit on the number of knowledge bases that can exist within an account?
No, there is no limit.
Can you describe the enrichment flow and custom processing flow in their various combinations?
Sure. You can use neither answer enrichment nor custom processing (LLM processing of answers), one or the other, or both.
Neither enrichment nor custom processing
This is a simple fetch and return flow; there’s no special processing involved.
- KnowledgeAI performs the search.
- KnowledgeAI fetches N matched articles per knowledge base.
- KnowledgeAI returns all of the articles to the client application: Conversation Assist or the Conversation Builder bot.
- The client application uses them as per its own requirements.
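The steps above can be sketched roughly as follows. This is a minimal illustration only; the class and function names are invented for this example and are not the actual KnowledgeAI API.

```python
# Illustrative sketch only: these names are NOT the real KnowledgeAI API.

class KnowledgeBase:
    """Toy stand-in for a knowledge base that can be searched."""

    def __init__(self, articles):
        self.articles = articles

    def search(self, query):
        # Toy relevance check: keep articles whose summary mentions the query term.
        return [a for a in self.articles if query in a["summary"]]


def simple_flow(knowledge_bases, query, n):
    """Fetch the top N matched articles per knowledge base and return them as-is."""
    results = []
    for kb in knowledge_bases:
        matches = kb.search(query)   # 1. KnowledgeAI performs the search
        results.extend(matches[:n])  # 2. fetch N matched articles per knowledge base
    return results                   # 3. return everything, unmodified, to the client app
```

The client application (Conversation Assist or the Conversation Builder bot) then decides what to do with the returned articles; no LLM is involved at any point.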
Enrichment but no custom processing
- KnowledgeAI performs the search.
- KnowledgeAI fetches N matched articles per knowledge base.
- KnowledgeAI sends all of the articles to the LLM, so it can use them to generate an enriched answer. The LLM generates one enriched answer per knowledge base. The enriched answer is populated in the summary of the top matched article.
- KnowledgeAI returns all of the articles to the client application: Conversation Assist or the Conversation Builder bot.
- The client application uses them as per its own requirements.
If the output of step 3 is malformed due to an error, the input to step 3 is used in step 4.
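The enrichment step and its fallback behavior can be sketched as below. The names and the validity check are illustrative assumptions for this example, not the real KnowledgeAI implementation.

```python
# Illustrative sketch only: these names are NOT the real KnowledgeAI API.

def enrich(articles, llm):
    """Ask the LLM for one enriched answer and store it in the top match's summary.

    If the LLM output is malformed, fall back to the unmodified input articles,
    mirroring the documented behavior: the input to step 3 is used in step 4.
    """
    enriched = llm(articles)           # step 3: LLM generates one enriched answer
    if not isinstance(enriched, str):  # malformed output -> use step 3's input instead
        return articles
    top, rest = articles[0], articles[1:]
    return [{**top, "summary": enriched}] + rest
```

With a well-formed LLM response, only the top matched article's summary changes; the remaining articles are passed through untouched.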
Custom processing but not enrichment
Custom processing is only available for Conversation Assist.
- KnowledgeAI performs the search.
- KnowledgeAI fetches N matched articles per knowledge base.
- KnowledgeAI sends all of the articles to the LLM, so it can perform custom processing.
- KnowledgeAI returns all of the articles to the client application: Conversation Assist.
- Conversation Assist uses them as per its own requirements.
If the output of step 3 is malformed due to an error, the input to step 3 is used in step 4.
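A comparable sketch of the custom processing step, with the same malformed-output fallback, might look like this. The function name and the validity check are assumptions made for illustration, not the actual KnowledgeAI code.

```python
# Illustrative sketch only: these names are NOT the real KnowledgeAI API.

def custom_process(articles, llm):
    """Apply LLM custom processing to the matched articles.

    If the LLM returns a malformed result, the input articles are returned
    unchanged, mirroring the documented fallback: step 3's input is used in step 4.
    """
    processed = llm(articles)  # step 3: LLM performs the custom processing
    valid = isinstance(processed, list) and all(isinstance(a, dict) for a in processed)
    return processed if valid else articles
```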
Both enrichment and custom processing
Custom processing is only available for Conversation Assist.
- KnowledgeAI performs the search.
- KnowledgeAI fetches N matched articles per knowledge base.
- KnowledgeAI sends all of the articles to the LLM, so it can use them to generate an enriched answer. The LLM generates one enriched answer per knowledge base. The enriched answer is populated in the summary of the top matched article. The entire output of this step is the input for the next step.
- KnowledgeAI sends all of the articles to the LLM, so it can perform custom processing.
- KnowledgeAI returns all of the articles, including the enriched answer, to the client application: Conversation Assist.
- Conversation Assist uses them as per its own requirements.
If the output of step 3 is malformed due to an error, the input to step 3 is used in step 4. Similarly, if the output to step 4 is malformed, the input to step 4 is used in step 5.
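The combined flow, with its two-level fallback, can be sketched as two chained LLM steps, each guarded the same way. All names and the validity check here are illustrative assumptions, not the real KnowledgeAI implementation.

```python
# Illustrative sketch only: these names are NOT the real KnowledgeAI API.

def safe_llm_step(articles, step):
    """Run one LLM step; if its output is malformed, pass the input through instead."""
    out = step(articles)
    ok = isinstance(out, list) and all(isinstance(a, dict) for a in out)
    return out if ok else articles


def combined_flow(articles, enrich_step, custom_step):
    """Enrichment first, then custom processing, each with its own fallback."""
    enriched = safe_llm_step(articles, enrich_step)  # steps 3-4: enrichment, with fallback
    return safe_llm_step(enriched, custom_step)      # steps 4-5: custom processing, with fallback
```

Because each step falls back independently, a failure in custom processing still preserves a successful enrichment, and a failure in both steps simply returns the original matched articles.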
Related info
- Learn how Conversation Assist takes the articles from KnowledgeAI and decides which to offer as answer recommendations to agents.
- Learn how a Conversation Builder bot—that integrates KnowledgeAI but doesn’t use Generative AI—takes the articles from KnowledgeAI and decides which to send as answers to the consumer.
- Learn how a Conversation Builder bot—that integrates KnowledgeAI and uses Generative AI to enrich answers—takes the articles from KnowledgeAI and decides which to send as answers to the consumer.
Does using answer enrichment and/or custom processing slow down response times?
Yes, usually. Every LLM call takes time, so latency grows with the number of LLM calls made per response. Always test thoroughly.
For the same reason, carefully consider whether to use these flows in the Voice channel, where delays are more noticeable to consumers.