Introduction
Scenarios are the backbone of every simulation. They define what the synthetic customer wants, why they are contacting your brand, and how success is measured. A well-crafted scenario produces realistic, useful conversations; a vague one produces noise. Follow the best practices below.
Description
- Write in second person ("You"). For example, "You purchased a router last week, and it keeps dropping connection," not "The customer has a router issue."
- Clearly state the customer's goals. The customer's goals, not the agent's goals, go here. For example, "You want a full refund or a replacement shipped overnight."
- Include context, such as what happened, what you've already tried, product names, plan names, etc. Abstract descriptions ("You have a problem with your service.") produce vague behavior from the synthetic customer - be descriptive!
- Get detailed when applicable: If your use case relies on specific location data (e.g., a destination airport), or other information (e.g., an order number), be sure to include this data in the description.
- Keep it relatively short. The scenario's description should be 3-6 sentences in length. This is enough detail without overwhelming the LLM.
- Use relative dates, such as "two days ago," "last Monday," or "a couple of weeks ago." Don't use absolute dates ("March 3") because they quickly become stale.
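The description guidelines above are mechanical enough to lint automatically. Below is a minimal, hypothetical helper (not part of any product API) that flags descriptions that aren't in second person, fall outside the 3-6 sentence range, or contain absolute month-and-day dates:

```python
import re

MONTHS = (r"(January|February|March|April|May|June|July|August|"
          r"September|October|November|December)")

def lint_description(description: str) -> list[str]:
    """Flag common problems with a scenario description (illustrative only)."""
    issues = []
    text = description.strip()
    # Split into sentences on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    if not text.startswith("You"):
        issues.append("write in second person: start with 'You'")
    if not 3 <= len(sentences) <= 6:
        issues.append(f"aim for 3-6 sentences (found {len(sentences)})")
    # Absolute dates like "March 3" go stale; prefer relative phrasing.
    if re.search(MONTHS + r"\s+\d{1,2}", text):
        issues.append("replace absolute dates with relative ones ('two days ago')")
    return issues

good = ("You signed up for the Premium Plan a couple of weeks ago. "
        "You were charged $49.99 but your account still shows the Free Plan. "
        "You've already tried logging out and back in. "
        "You want the features activated or a full refund.")
print(lint_description(good))                                   # []
print(lint_description("The customer has a router issue."))
```

A check like this is easy to run over an existing scenario library before a review cycle.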
Example description
You signed up for the Premium Plan a couple of weeks ago. You were charged $49.99 but your account still shows the Free Plan features. You've already tried logging out and back in. You want this resolved immediately — either activate the features you paid for or give you a full refund.
Agent goals
- Be specific and observable. Each criterion should describe something an assessor can verify from the transcript alone. A good example is, "Agent confirmed the customer's order number before proceeding." A bad example is, "Agent was helpful."
- Keep goals scenario-specific. Goals that apply to multiple scenarios belong in scorecards, not here. That said, in the current release, scorecards aren't customizable. Stay tuned for this feature!
- Align with your brand's actual process. The goals should mirror the steps your agents are trained to follow: verification, troubleshooting, resolution, wrap-up, etc.
- Write independent goals. Each goal should stand on its own. Avoid goals that only make sense if a previous goal passed.
- Keep the count manageable. Three to seven goals per scenario is typical. More than seven adds noise and makes it harder for the AI to score reliably.
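The goal guidelines above can also be sketched as a quick validation pass. This is a hypothetical helper, not a product feature; the vague-word list is an illustrative assumption:

```python
# Vague adjectives an assessor cannot verify from a transcript alone (assumed list).
VAGUE_WORDS = {"helpful", "friendly", "polite", "professional", "good"}

def validate_goals(goals: list[str]) -> list[str]:
    """Flag agent-goal lists that break the guidelines above (illustrative only)."""
    issues = []
    if not 3 <= len(goals) <= 7:
        issues.append(f"aim for 3-7 goals (found {len(goals)})")
    for goal in goals:
        words = {w.strip(".,").lower() for w in goal.split()}
        if words & VAGUE_WORDS:
            issues.append(f"not observable: {goal!r}")
    return issues

goals = [
    "Agent confirmed the customer's order number before proceeding",
    "Agent offered a replacement or a full refund",
    "Agent was helpful",
]
print(validate_goals(goals))   # flags only the third, unobservable goal
```

Keyword checks like this won't catch every subjective goal, but they catch the most common offenders cheaply.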
Skill selection
- Always assign the correct skill. Conversations route to your agents based on the skill. A mismatch means conversations land in the wrong queue, rendering your performance data inaccurate.
- Use different skills to segment tests. If you want to test your Billing AI agent and your Sales AI Agent separately, create separate scenarios that use distinct skills rather than mixing them in one simulation.
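One way to picture skill-based segmentation: group scenarios by their assigned skill and treat each group as its own test run. The record shape below is an assumption for illustration, not an actual scenario schema:

```python
from collections import defaultdict

# Hypothetical scenario records; 'skill' determines which AI agent gets the conversation.
scenarios = [
    {"name": "Billing Dispute", "skill": "billing"},
    {"name": "Refund Request",  "skill": "billing"},
    {"name": "Upgrade Inquiry", "skill": "sales"},
]

by_skill: dict[str, list[str]] = defaultdict(list)
for s in scenarios:
    by_skill[s["skill"]].append(s["name"])

# Each skill becomes its own segment, so Billing and Sales results stay separate.
print(dict(by_skill))
# {'billing': ['Billing Dispute', 'Refund Request'], 'sales': ['Upgrade Inquiry']}
```

Keeping one skill per scenario is what makes this split (and per-agent performance data) meaningful.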
Scenario maintenance
- Review and update scenarios quarterly (or whenever processes change). Stale scenarios with outdated steps or goals produce misleading results.
- Use tags to organize. Group by department, use case, or difficulty level. This is especially useful when your library grows past 20–30 scenarios.
- Version through naming. If you significantly rework a scenario, consider naming the second iteration with a version indicator ("Billing Dispute v2") rather than overwriting the first iteration.
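The naming convention above ("Billing Dispute v2") can be applied consistently with a small helper. This is a sketch of one possible convention, not a built-in feature:

```python
import re

def next_version_name(name: str) -> str:
    """Return the next versioned name for a reworked scenario, e.g.
    'Billing Dispute' -> 'Billing Dispute v2', 'Billing Dispute v2' -> 'Billing Dispute v3'."""
    m = re.fullmatch(r"(.*) v(\d+)", name)
    if m:
        return f"{m.group(1)} v{int(m.group(2)) + 1}"
    return f"{name} v2"

print(next_version_name("Billing Dispute"))      # Billing Dispute v2
print(next_version_name("Billing Dispute v2"))   # Billing Dispute v3
```

Keeping the old version around lets you compare results before and after a rework instead of losing the baseline.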