Prompts

In the context of Large Language Models (LLMs), the prompt is the text or input that is given to the model in order to receive generated text as output. A prompt can be a single sentence, a paragraph, or even a series of instructions or examples.

The LLM uses the provided prompt to understand the context and generate a coherent and relevant response. The quality and relevance of the generated response depend on the clarity of the instructions and how well the prompt conveys your intent.

The interaction between the prompt and the model is a key factor in determining the accuracy, style, and tone of the generated output. It's crucial to formulate prompts effectively to elicit the desired type of response from the model. See our best practices for writing prompt instructions.

Prompt Library

The Prompt Library is the user interface that you use to select and manage prompts. It's a shared interface: You see the same library regardless of how you access it.

A view of the Prompt Library with the account's prompts listed

Prompt templates

Prompt templates are prompts that are created, tested, and maintained by LivePerson. They’re intended to get you up and running quickly, so you can explore a Generative AI solution.

When you select a prompt template, you’re presented with a deep copy that you can customize if desired. Once it’s saved and selected, the deep copy is added to My Prompts in the Prompt Library.

A view of the Prompt Library with LivePerson's templates listed

There’s no ongoing relationship between a prompt template and your copy of it. Your copy is independent by design: We don’t want our updates to the prompt templates to affect your solution inadvertently.

Custom prompts

Custom prompts are prompts that users of your account create, and they’re visible only within the account. You can create custom prompts tailored to your specific requirements.

Variables in prompts

When writing the text or instructions for a prompt, you can reference variables to dynamically include relevant info.

Key benefits

  • Contextual understanding: Variables can help provide context to the LLM. By passing contextual info through variables, you help the model better understand the context of the conversation and generate a more coherent and contextually relevant response.
  • Dynamic content generation: Including variables lets you generate dynamic content that’s customized based on consumer inputs and changing conditions. The prompt can adapt to specific contexts.
  • Personalization: Variables enable personalization of responses. By injecting consumer-specific data into prompts, you can create tailored responses that are more relevant and engaging for consumers, improving the experience.
  • Efficiency: Variables streamline interactions by reducing the need for repetitive prompts. Instead of writing out multiple variations of a prompt, you can use variables to fill in the specific details.

Variables that you can use

  • {brand_name}
  • {brand_industry}
  • {brand_info} - Supported but not recommended
  • {$botContext.botVariableName} - For use in Conversation Builder bots only

{brand_name} and {brand_industry}

We recommend that you follow the example in our prompt templates and use {brand_name} and {brand_industry} in your prompts. Research by our data scientists has revealed that this helps the response to stay in bounds, i.e., specific to your brand, with fewer hallucinations.

When you activate our Generative AI features in the Management Console, we ask you to specify your brand name and industry for this reason. The values that you specify are used as the values of these variables. Return to the Management Console to change the values at any time.
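To make the substitution concrete, here’s a minimal Python sketch of the kind of resolution that happens at runtime. It’s an illustration only, with assumed example values; it isn’t LivePerson’s implementation.

```python
# Illustration only: resolving {brand_name} and {brand_industry}
# against the values configured in the Management Console.

# Example values (assumed for this sketch)
brand_values = {
    "brand_name": "Acme Travel",
    "brand_industry": "the travel industry",
}

prompt_template = (
    "You are a customer service agent for {brand_name} "
    "belonging to {brand_industry}."
)

# Python's str.format happens to use the same {name} syntax as these variables
resolved_prompt = prompt_template.format(**brand_values)
print(resolved_prompt)
# You are a customer service agent for Acme Travel belonging to the travel industry.
```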

{brand_info}

While we support this variable, we don’t recommend that you use it. Use {brand_name} and {brand_industry} instead.

Here’s the backstory on this variable: LivePerson used this variable early on in its Generative AI offering to accommodate situations where some brands supplied their brand name and industry, but others didn’t. We needed a way to write prompts that accommodated both scenarios at once, and {brand_info} met that need efficiently. But now that you can create and edit your own prompts via the Prompt Library, LivePerson no longer needs this variable. Still, some brands might notice that it’s used in some of their earliest prompts.

How this variable works: Suppose the prompt starts with, “You are an AI agent {brand_info}…” At runtime, if you have specified your brand name and industry in the Management Console, this is expanded to, “You are an AI agent for {brand_name} belonging to {brand_industry}…” And then {brand_name} and {brand_industry} are replaced by the actual values in the Management Console. Importantly, if you haven’t specified your brand name and industry in the Management Console, {brand_info} isn’t expanded. Instead, the prompt just starts with, “You are an AI agent.” Either way, the grammar is correct, and the meaning is clear.
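Here’s a rough Python sketch of that conditional expansion. It collapses the two expansion steps into one, and it’s hypothetical, not LivePerson’s actual code:

```python
# Hypothetical sketch: expand {brand_info} only when both values are configured.

def expand_brand_info(prompt: str, brand_name: str | None,
                      brand_industry: str | None) -> str:
    if brand_name and brand_industry:
        # Both values configured: expand to the full phrase
        expansion = f"for {brand_name} belonging to {brand_industry}"
        return prompt.replace(" {brand_info}", " " + expansion)
    # Values not configured: drop the variable (and its leading space) entirely
    return prompt.replace(" {brand_info}", "")

prompt = "You are an AI agent {brand_info} who helps consumers."
print(expand_brand_info(prompt, "Acme Travel", "the travel industry"))
# You are an AI agent for Acme Travel belonging to the travel industry who helps consumers.
print(expand_brand_info(prompt, None, None))
# You are an AI agent who helps consumers.
```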

As mentioned above, {brand_info} is supported but not recommended. You control the text of your prompts, so use {brand_name} and {brand_industry} instead.

botContext variables

If the prompt will be used in a Conversation Builder bot that automates answers to consumers, you can include in the prompt a reference to any variable that’s set in the bot's botContext. Follow the prescribed syntax: {$botContext.botVariableName}
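Because this syntax contains a $ and a dot, a simple format-style substitution wouldn’t resolve it. The Python sketch below illustrates the kind of lookup involved; the resolution code and the example variables are assumptions for illustration, not LivePerson’s implementation:

```python
import re

# Hypothetical illustration: resolving {$botContext.*} references against
# variables set in a Conversation Builder bot session.
bot_variables = {
    "promoCode": "SAVE20",
    "errorCode": "PROMO_EXPIRED",
}

prompt = (
    "The customer attempted to use {$botContext.promoCode} "
    "but received the error {$botContext.errorCode}."
)

def resolve_bot_context(prompt: str, variables: dict[str, str]) -> str:
    # Match {$botContext.someName} and look someName up in the session
    # variables; leave unresolved references as-is.
    pattern = re.compile(r"\{\$botContext\.(\w+)\}")
    return pattern.sub(lambda m: variables.get(m.group(1), m.group(0)), prompt)

print(resolve_bot_context(prompt, bot_variables))
# The customer attempted to use SAVE20 but received the error PROMO_EXPIRED.
```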

Learn how to set variables in the botContext.

Learn about PCI and PII masking.

Example

You are a customer service agent for {brand_name} belonging to {brand_industry}. You help users understand issues with using promotional codes. For every customer message, respond with information from the Knowledge Articles, and if you cannot find the information in the Knowledge Articles, say exactly "I'm sorry, I couldn't find any information about that. Is there anything else I can help you with today?"

### INSTRUCTIONS

1. Always follow your instructions.
2. Your response should be at least 10 words and no more than 300 words.
3. When the question is related to the calculation for fees or prices: Your job is to create a mathematical assessment based on the facts provided to evaluate the final conclusion. Simplify the problem when possible.
4. Respond to the question or request by summarizing your findings from Knowledge Articles.
5. If the question is related to a specific product, service, program, or membership, first make sure that the exact name exists in the Knowledge Articles. Otherwise, say, "I can't find that information."

### CUSTOMER INFORMATION

1. The customer attempted to use {$botContext.promoCode} to purchase {$botContext.itemsInCart} but received the following error message: {$botContext.errorCode}. Please use information about the promotion from the Knowledge Articles to help the customer.

Hallucinations

Hallucinations are situations where the underlying LLM service generates incorrect or nonsensical responses, or responses that aren't grounded in the contextual data or brand knowledge that was provided.

For example, suppose the consumer asks, “Tell me about your 20% rebate for veterans.” If the LLM service treats the presupposition in that query (that such a rebate exists) as true when in fact it isn’t, it will hallucinate and send an incorrect response.

Be aware that all prompts have the potential for hallucinations. Typically, this happens when the service relies too heavily on the underlying language model and fails to effectively leverage the provided source content. The degree of risk depends on the prompt style that’s used. For example, if your solution uses answers that are enriched via Generative AI, consider these questions:

  • Does the prompt direct the service to respond using only the info in the matched articles?
  • Does the prompt direct the service to adhere to the script in the matched articles as much as possible?
  • Does the prompt direct the service to come up with answers independently when necessary (i.e., when no relevant articles are matched), using both the info in the conversation context and in its language model?

It’s important to carefully evaluate the prompts that you create regarding questions like those above. Strike a balance between constraint and freedom that aligns with the level of risk that you accept. And always test your prompts thoroughly before using them in Production.

Conversational Cloud's LLM Gateway has a Hallucination Detection post-processing service that detects and handles hallucinations with respect to URLs, phone numbers, and email addresses.
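The internals of that service aren’t public; the Python sketch below only illustrates the general idea of such a post-processing check, using URLs as the example. The function name and example strings are assumptions for illustration:

```python
import re

# Hypothetical illustration of a post-processing hallucination check:
# flag URLs in the model's answer that never appeared in the source content.
URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def find_ungrounded_urls(answer: str, source_content: str) -> list[str]:
    source_urls = set(URL_PATTERN.findall(source_content))
    return [url for url in URL_PATTERN.findall(answer) if url not in source_urls]

answer = "You can apply at https://example.com/rebates or call us."
knowledge = "Current promotions are listed at https://example.com/promotions."
print(find_ungrounded_urls(answer, knowledge))
# ['https://example.com/rebates']  -> candidate hallucination to handle
```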

Prompt updates by LivePerson

LivePerson actively and rigorously tests all default prompts and prompt templates. When we identify an opportunity for improvement, we make an update. Regularly review the release notes in our Knowledge Center for info on changes to LivePerson default prompts and prompt templates.

Prompt updates by LivePerson shouldn’t affect your solution unless you’re using a default prompt that we’ve updated. To prevent your solution from being altered without your awareness, we strongly recommend that you change these instances so that each uses one of your custom prompts.