Overview

SuperBryn uses LLM-powered generation to create realistic test scenarios. Instead of writing test cases manually, you select a ring and path, and the AI generates scenarios grounded in your agent’s call flow, policies, and knowledge base.

How to generate

  1. Go to the Evaluators page
  2. Click Generate Scenarios
  3. Select:
    • Ring: Which ring to generate for (1-8)
    • Path: Which call flow path to target
    • Variant (Ring 2 only): Which policy rule to test
  4. Set the number of scenarios to generate
  5. Click Generate; results stream in real time via Server-Sent Events (SSE)
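The streamed results follow the standard SSE wire format, where each event is carried on `data:` lines separated by blank lines. As a minimal sketch of how a client might collect the streamed scenarios (the payload shape here is illustrative, not the actual API contract):

```python
import json

def parse_sse_events(stream_lines):
    """Collect JSON payloads from SSE `data:` lines.

    Assumes each event carries its payload on a single `data:` line,
    with a blank line terminating the event (the common SSE layout).
    """
    events = []
    for line in stream_lines:
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            events.append(json.loads(payload))
    return events

# Two hypothetical streamed scenarios (field names are invented for illustration)
raw = [
    'data: {"intent": "Reschedule an appointment"}',
    "",
    'data: {"intent": "Cancel an order"}',
    "",
]
scenarios = parse_sse_events(raw)
```

A real client would read these lines from the HTTP response as they arrive, appending each parsed scenario to the UI incrementally.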

What gets generated

Each scenario contains three fields, generated sequentially:
| Field | Description |
| --- | --- |
| User Perspective | A second-person description of what the caller does and experiences |
| Expected Outcome | Step-by-step expected agent behavior |
| Intent | What the caller wants, from the agent’s perspective |
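A scenario with these three fields could be represented as a simple record type; this sketch uses assumed field names, not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    user_perspective: str  # second-person description of the caller's actions
    expected_outcome: str  # step-by-step expected agent behavior
    intent: str            # what the caller wants, from the agent's perspective

# Illustrative instance; the content is invented, not real generated output
s = Scenario(
    user_perspective="You call to reschedule tomorrow's appointment.",
    expected_outcome="1. Verify identity. 2. Offer new slots. 3. Confirm.",
    intent="Caller wants to reschedule an existing appointment.",
)
```

Because the fields are generated sequentially, each later field can condition on the earlier ones (e.g. the expected outcome is written with the user perspective already fixed).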

What feeds into generation

The AI prompt is assembled from multiple sources:
| Source | Purpose |
| --- | --- |
| Ring-specific prompts | System/user prompt templates per ring (customizable) |
| Call flow path | The specific flow path being tested |
| Variant context | (Ring 2) The policy rule and risk being tested |
| Knowledge base | Domain-specific content from uploaded documents |
| Few-shot examples | Sample outputs that guide format and quality |
| Agent type | Inbound vs. outbound context |
| Industry | Industry-specific framing |
| Word limits | Configurable max length per field |

Reviewing and editing

Generated scenarios appear in the evaluator list. You can:
  • Edit any field to refine the scenario
  • Delete scenarios that aren’t relevant
  • Regenerate if the output doesn’t match your needs

Batch generation

You can generate multiple scenarios in a single batch. Scenarios are generated concurrently (up to 5 in parallel) for faster throughput.
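A concurrency cap like this is commonly implemented with a semaphore around each generation task. A sketch of the pattern using Python's asyncio (the `generate_one` body stands in for the actual LLM call):

```python
import asyncio

MAX_PARALLEL = 5  # matches the documented limit of 5 concurrent generations

async def generate_one(i: int) -> str:
    """Placeholder for a single scenario-generation request."""
    await asyncio.sleep(0)  # a real implementation would await the LLM call here
    return f"scenario-{i}"

async def generate_batch(n: int) -> list[str]:
    """Run n generation tasks, at most MAX_PARALLEL at a time."""
    sem = asyncio.Semaphore(MAX_PARALLEL)

    async def bounded(i: int) -> str:
        async with sem:
            return await generate_one(i)

    return await asyncio.gather(*(bounded(i) for i in range(n)))

results = asyncio.run(generate_batch(8))
```

With a batch of 8, the first 5 tasks start immediately and the remaining 3 wait for a slot, so total wall-clock time is roughly two generation rounds rather than eight.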