Introduction

When writing the instructions in a prompt, you can reference variables to dynamically include relevant info. Key benefits include:

  • Contextual understanding: Variables can help provide context to the LLM. By passing contextual info through variables, you help the model understand the conversation and generate a more coherent, contextually relevant response.
  • Dynamic content generation: Including variables lets you generate dynamic content that’s customized based on consumer inputs and changing conditions. The prompt can adapt to specific contexts.
  • Personalization: Variables enable personalization of responses. By injecting consumer-specific data into prompts, you can create tailored responses that are more relevant and engaging for consumers, improving the experience.
  • Efficiency: Variables streamline interactions by reducing the need for repetitive prompts. Instead of writing out multiple variations of a prompt, you can use variables to fill in the specific details.
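The benefits above all rest on simple string substitution: at runtime, each {name} placeholder in the prompt is replaced with a concrete value. Here's a minimal sketch of that idea; the function, prompt text, and values are illustrative only, not part of the product:

```python
# Illustrative sketch of variable substitution in a prompt.
# resolve_prompt and the sample values are hypothetical, not a LivePerson API.

def resolve_prompt(template: str, variables: dict) -> str:
    """Replace each {name} placeholder with its value from `variables`."""
    result = template
    for name, value in variables.items():
        result = result.replace("{" + name + "}", str(value))
    return result

prompt = "Greet the consumer named {consumer_name} who asked about {topic}."
resolved = resolve_prompt(prompt, {"consumer_name": "Dana", "topic": "billing"})
print(resolved)
# → Greet the consumer named Dana who asked about billing.
```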

The Variables tab within a prompt, which you use to add and remove variables from the prompt

Types of variables

Variables fall into two categories:

  • Client type-specific variables
  • Custom variables

Client type-specific variables

Some variables make sense in some use cases but not in others. So, the prompt’s Client type determines the set of client type-specific variables that are available.

A callout to the client type-specific variables in a prompt

Custom variables

Custom variables include a dynamic list of variables that are defined by you.

A callout to the custom variables in a prompt

Custom variables are supported only if the prompt’s Client type is one related to LivePerson Conversation Builder.

If you’re working on a prompt for use in a bot, you can include any variable that’s set in the bot's botContext. This includes the variables defined in the __initConversation method in the bot’s global functions and any session-scoped variables defined elsewhere in the bot.

Use of botContext variables in a prompt relies entirely on your knowledge of the Conversation Builder bot and the variables used in the bot.

When entering the variable into the prompt, take care to reference it by its exact name, using this exact syntax and case: {botContext_variablename}

Use an underscore after botContext, not a period. This is necessary because the LLM treats a period as a delimiting character, which isn't desirable.
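To illustrate the required syntax, here's a small, hypothetical check (not a LivePerson API) that accepts the underscore form and rejects the period form:

```python
import re

# Illustrative only: botContext variables must be referenced as
# {botContext_variablename} — an underscore after botContext, not a period.
BOT_CONTEXT_REF = re.compile(r"\{botContext_[A-Za-z0-9_]+\}")

def uses_valid_syntax(ref: str) -> bool:
    """Return True if the reference uses the underscore form."""
    return BOT_CONTEXT_REF.fullmatch(ref) is not None

print(uses_valid_syntax("{botContext_orderStatus}"))  # underscore: valid
print(uses_valid_syntax("{botContext.orderStatus}"))  # period: invalid
```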

Best practices

  • The system performs no validation of variable usage. Take care to insert only variables that are relevant for the given use case, and take care not to remove variables that are required for the use case.
  • Consider specifying default values for custom variables that store dynamic data. This is a failsafe mechanism for times when the variable’s value can’t be resolved at runtime due to an issue.
  • Test thoroughly.

PCI and PII masking

Learn about PCI and PII masking.

Variables for automated conversation summaries

If you’re working on a prompt to generate conversation summaries from the LLM, the following client type-specific variables are available:

  • {language}: Optional. Include this variable to generate summaries in a specific language. If you omit this variable, the summaries are generated in English.
  • {text}: Required. At runtime, this is resolved to include the conversation's transcript.

Don’t remove the {text} variable. If you do, conversation transcripts won't reach the LLM, leading to inaccurate summaries. We expose the variable in the UI, so you can 1) locate it in the prompt wherever you require, and 2) easily add it back if you inadvertently delete it.
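As a rough sketch of how these two variables might be resolved at runtime (the helper function and sample values are illustrative assumptions, not product code):

```python
# Hypothetical resolution of a summary prompt's variables.
# The resolve helper and sample transcript are assumptions for illustration.

def resolve(template: str, values: dict) -> str:
    for name, value in values.items():
        template = template.replace("{" + name + "}", value)
    return template

template = "Summarize the following conversation in {language}:\n{text}"
resolved = resolve(template, {
    "language": "Spanish",  # optional; summaries default to English if omitted
    "text": "Consumer: Where is my order?\nAgent: It ships tomorrow.",
})
print(resolved)
```

If {text} were removed from the template, the transcript would never reach the LLM, which is why the variable is required.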

Learn more about {language} and {text}.

Variables for KnowledgeAI agents

If you’re working on a prompt for a KnowledgeAI agent, the following client type-specific variables are available:

  • {brand_name}
  • {brand_industry}
  • {knowledge_articles_matched}

{brand_name} and {brand_industry}

We recommend that you follow the example in our prompt templates and use {brand_name} and {brand_industry} in your prompts. Research by our data scientists has revealed that this helps the response stay in bounds, i.e., specific to your brand, with fewer hallucinations.

When you activate our Generative AI features in the Management Console, we ask you to specify your brand name and industry for this reason. At runtime, the values that you specified are used as the values of these variables. Return to the Management Console to change the values at any time.

{knowledge_articles_matched}

At runtime, this variable is resolved to include the list of articles that matched the consumer's query. Learn about the matched articles.

Research indicates that the position of this variable can strongly influence the generated response, so place it thoughtfully. Many brands put this variable at the end of the prompt.

Don’t remove the {knowledge_articles_matched} variable. If you do, KnowledgeAI's answer enrichment service won't work. We expose the variable in the UI, so you can 1) locate the variable in the prompt wherever you require, and 2) easily add it back if you inadvertently delete it.
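To make the idea concrete, here's a hypothetical sketch of how {knowledge_articles_matched} might resolve to a formatted list of matched articles. The article structure and rendering below are assumptions for illustration only, not the actual service behavior:

```python
# Illustrative only: an assumed article structure and rendering for
# {knowledge_articles_matched}; the real resolution is handled by KnowledgeAI.

articles = [
    {"title": "Return policy", "summary": "Returns accepted within 30 days."},
    {"title": "Shipping times", "summary": "Orders ship within 2 business days."},
]

rendered = "\n".join(f"- {a['title']}: {a['summary']}" for a in articles)
prompt = "Answer using only these articles:\n{knowledge_articles_matched}"
resolved = prompt.replace("{knowledge_articles_matched}", rendered)
print(resolved)
```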

Variables for Routing AI agents

If you’re working on a prompt for a Routing AI agent, there's just one client type-specific variable available:

  • {intent_list}

{intent_list}

At runtime, this variable is resolved to include the list of routes (their names and descriptions) that are defined in the Guided Routing interaction.

Don’t remove the {intent_list} variable. If you do, the Guided Routing interaction won't work. We expose the variable in the UI, so you can 1) locate the variable in the prompt wherever you require, and 2) easily add it back if you inadvertently delete it.
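As an illustration only, {intent_list} might resolve to something like the following; the route data and formatting here are assumed, not taken from the product:

```python
# Illustrative only: assumed route names/descriptions and formatting for
# {intent_list}; the real list comes from the Guided Routing interaction.

routes = [
    ("Billing", "Questions about invoices and payments"),
    ("Tech support", "Help with product setup and troubleshooting"),
]

intent_list = "\n".join(f"{name}: {desc}" for name, desc in routes)
prompt = "Classify the consumer's request into one of:\n{intent_list}"
resolved = prompt.replace("{intent_list}", intent_list)
print(resolved)
```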

Insert a client type-specific variable

  • Click the copy icon beside the variable. Then paste it into the prompt.

    A callout to the copy icon and how it can be used to copy a variable and then paste it into the system prompt

Remove a client type-specific variable

  • Delete it from the prompt manually.

The variable always remains in the list on the left, so you can add it back later if desired.

Insert a custom variable

  • Manually enter the variable name in the prompt. Enclose the name in opening and closing curly braces, i.e., { and }. This adds the variable to the Custom section on the left:

Adding a custom variable to a prompt

There is no validation performed when entering custom variables. Use the prescribed syntax, and reference the correct variable name exactly.

After a custom variable is added to the list on the left, it can be copied from the left and pasted into the prompt. In this way, you can quickly reuse the variable.

Remove a custom variable


  • If you delete the variable from the list on the left, this removes all instances of it in the prompt.
  • If you delete a single instance within the prompt, but more instances of it exist, the variable remains in the list on the left. Once you remove the last instance from the prompt, the variable is removed from the list.

Specify a default value and description for a custom variable

You can optionally specify a default value and a description for a custom variable.

  • Default value: A default value is used in situations where the variable's actual value can't be resolved at runtime due to some issue. For example, perhaps there's a typo in the variable's name. Such situations can render a prompt unusable by the LLM, so we recommend that you specify default values for variables that store dynamic data, as a failsafe technique. The Default value field accepts all characters (even URLs are allowed), but not HTML tags.
  • Description: A meaningful description can be helpful for others to understand a variable's purpose.
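A minimal sketch of the failsafe idea, assuming a simple resolver (not LivePerson code): when a runtime value is missing, the default fills in.

```python
# Hypothetical resolver showing how a default value acts as a failsafe
# when a custom variable can't be resolved at runtime.

def resolve_with_defaults(template: str, runtime_values: dict, defaults: dict) -> str:
    result = template
    for name, default in defaults.items():
        value = runtime_values.get(name, default)  # fall back on the default
        result = result.replace("{" + name + "}", str(value))
    return result

template = "The consumer's plan is {plan_name}."
# No runtime value available, so the default is used:
resolved = resolve_with_defaults(template, {}, {"plan_name": "a standard plan"})
print(resolved)
# → The consumer's plan is a standard plan.
```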

To specify a default value and/or a description, click the custom variable in the list on the left, and specify the values in the window that opens:

Adding a default value and description to a custom variable