Overview
SuperBryn uses LLM-powered generation to create realistic test scenarios. Instead of writing test cases manually, you select a ring and path, and the AI generates scenarios grounded in your agent’s call flow, policies, and knowledge base.

How to generate
- Go to the Evaluators page
- Click Generate Scenarios
- Select:
  - Ring: which ring to generate for (1-8)
  - Path: which call flow path to target
  - Variant (Ring 2 only): which policy rule to test
- Set the number of scenarios to generate
- Click Generate; results stream in real time via SSE (server-sent events)
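A client can consume the stream by parsing `text/event-stream` frames as they arrive. A minimal sketch of such a parser; the `data:` payload shape and field names here are assumptions for illustration, not SuperBryn's actual wire format:

```typescript
// A partially generated scenario, as it might arrive over the stream.
// Field names are assumptions, not SuperBryn's actual payload schema.
interface ScenarioEvent {
  userPerspective?: string;
  expectedOutcome?: string;
  intent?: string;
}

// Split a raw SSE chunk into its `data:` payloads. Per the SSE format,
// events are separated by a blank line and data lines start with "data: ".
function parseSseChunk(chunk: string): ScenarioEvent[] {
  return chunk
    .split("\n\n")
    .filter((frame) => frame.startsWith("data: "))
    .map((frame) => JSON.parse(frame.slice("data: ".length)) as ScenarioEvent);
}
```

In a browser, the built-in `EventSource` API handles this framing for you; a hand-rolled parser like the above is only needed when reading the stream through `fetch`.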
What gets generated
Each scenario contains three fields, generated sequentially:

| Field | Description |
|---|---|
| User Perspective | A second-person description of what the caller does and experiences |
| Expected Outcome | Step-by-step expected agent behavior |
| Intent | What the caller wants, from the agent’s perspective |
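Sequential generation means each field can condition on the fields generated before it. A sketch of that pipeline, where `generate` stands in for the model call (its signature is an assumption for illustration):

```typescript
// The three fields of a generated scenario.
interface Scenario {
  userPerspective: string;
  expectedOutcome: string;
  intent: string;
}

// Generate the fields in order, passing the draft so far into each call,
// so Expected Outcome can see User Perspective, and Intent can see both.
// `generate` is a hypothetical stand-in for the actual LLM call.
async function generateScenario(
  generate: (field: keyof Scenario, draft: Partial<Scenario>) => Promise<string>
): Promise<Scenario> {
  const draft: Partial<Scenario> = {};
  draft.userPerspective = await generate("userPerspective", draft);
  draft.expectedOutcome = await generate("expectedOutcome", draft);
  draft.intent = await generate("intent", draft);
  return draft as Scenario;
}
```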
What feeds into generation
The AI prompt is assembled from multiple sources:

| Source | Purpose |
|---|---|
| Ring-specific prompts | System/user prompt templates per ring (customizable) |
| Call flow path | The specific flow path being tested |
| Variant context | (Ring 2) The policy rule and risk being tested |
| Knowledge base | Domain-specific content from uploaded documents |
| Few-shot examples | Sample outputs that guide format and quality |
| Agent type | Inbound vs. outbound context |
| Industry | Industry-specific framing |
| Word limits | Configurable max length per field |
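Conceptually, these sources are concatenated into one prompt, with the Ring 2 variant section included only when present. A minimal sketch of that assembly; the section labels, ordering, and input names are assumptions, not SuperBryn's actual template:

```typescript
// Inputs mirroring the source table above. All names are illustrative.
interface PromptInputs {
  ringPrompt: string;            // ring-specific prompt template
  path: string;                  // call flow path being tested
  variant?: string;              // Ring 2 only: policy rule under test
  knowledgeBase: string[];       // domain content from uploaded documents
  fewShot: string[];             // sample outputs guiding format and quality
  agentType: "inbound" | "outbound";
  industry: string;
  maxWords: number;              // configurable per-field word limit
}

function assemblePrompt(i: PromptInputs): string {
  const sections = [
    i.ringPrompt,
    `Call flow path: ${i.path}`,
    i.variant ? `Policy variant: ${i.variant}` : null,
    `Agent type: ${i.agentType} | Industry: ${i.industry}`,
    `Knowledge base:\n${i.knowledgeBase.join("\n")}`,
    `Examples:\n${i.fewShot.join("\n---\n")}`,
    `Keep each field under ${i.maxWords} words.`,
  ];
  // Drop the variant section when it is not supplied (rings other than 2).
  return sections.filter((s): s is string => s !== null).join("\n\n");
}
```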
Reviewing and editing
Generated scenarios appear in the evaluator list. You can:

- Edit any field to refine the scenario
- Delete scenarios that aren’t relevant
- Regenerate if the output doesn’t match your needs

