Conversation Process Best Practices
Four conversation process anti-patterns that break in production and how to fix them.
Conversation processes wire together slots and actions into multi-turn flows. The patterns here are the ones that pass local testing but break once real users and real data hit them. Each one looks harmless in the editor — the damage shows up in production.
The YAML examples on this page represent conversation process configurations. In practice, you build these in the Agent Studio visual editor — the YAML is the underlying representation of what the editor produces.
Looking for the chained-actions pattern? That’s covered in The Golden Rule — the single most important architecture principle in Agent Studio.
Async Output Referenced Later
This is the most common silent failure in conversation processes. It passes every test because timing works differently in the editor.
Only compound actions can run asynchronously — wait_execution is a compound action setting. When you use a compound action as an activity inside a conversation process, it defaults to synchronous (wait_execution: true). Setting it to false makes it fire-and-forget.
What it looks like
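A minimal sketch of the anti-pattern. `wait_execution` and the `data.<output_key>` reference syntax are the ones documented on this page; the surrounding key names (`activities`, `action`, `output_key`, `input_args`) are illustrative and may differ from your Agent Studio version's exact schema:

```yaml
activities:
  - action: track_analytics_event      # compound action
    output_key: analytics_result
    wait_execution: false              # fire-and-forget
  - action: send_confirmation
    input_args:
      # Reads output from the async action above — empty when this runs
      tracking_id: data.analytics_result.tracking_id
```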
Why it breaks
When wait_execution: false:
- The action is dispatched to the backend
- Execution continues immediately to the next activity
- The output is empty — the action hasn’t completed yet
You can’t reference output from an async action because it doesn’t exist at the time the next activity runs.
Real consequence
data.analytics_result.tracking_id resolves to undefined. The output mapper fails silently or returns null. The user sees a broken response with a missing tracking ID — or the entire downstream activity fails with no useful error message.
The fix
Either make it synchronous if you need the output:
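(Illustrative sketch; key names other than `wait_execution` are assumptions.)

```yaml
  - action: track_analytics_event
    output_key: analytics_result
    wait_execution: true    # block until the output exists, then read it downstream
```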
Or don’t reference the output if you truly want fire-and-forget:
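(Same caveat on key names; the point is that nothing downstream touches `data.analytics_result`.)

```yaml
  - action: track_analytics_event
    output_key: analytics_result
    wait_execution: false              # fine: pure side effect
  - action: send_confirmation
    input_args:
      message: "Your request was submitted."   # no reference to data.analytics_result
```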
Rule of thumb: If any later activity reads data.<output_key>.* from a compound action, that compound action must have wait_execution: true. Async is only for side effects you never need to read downstream.
No Confirmation on Destructive Actions
The reasoning engine interprets user intent. It does not always interpret it correctly. Without a confirmation step, there’s no checkpoint between “engine decided” and “action executed.”
What it looks like
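A sketch of the anti-pattern, with illustrative action and key names (`delete_project`, `resolved_project_id` are hypothetical; `confirmation_policy` is the setting this page documents):

```yaml
  - action: delete_project
    output_key: delete_result
    # No confirmation_policy: executes as soon as the engine picks a target
    input_args:
      project_id: data.resolved_project_id
```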
Why it breaks
Users are ambiguous. The reasoning engine resolves ambiguity with its best guess. Without confirmation:
- The wrong item gets deleted
- A meeting is scheduled for the wrong time or wrong people
- There’s no chance to catch the error before an irreversible action executes
Real consequence
User: “Delete my old project”
Engine: interprets “old project” as “Current Project” (the most recently discussed one).
Action: deletes Current Project.
User: “Wait, not that one!”
Too late. Data gone. No undo.
The fix
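Add `confirmation_policy: true` to the destructive activity (same hypothetical action names as above):

```yaml
  - action: delete_project
    output_key: delete_result
    confirmation_policy: true   # engine shows its interpretation and waits for a yes
    input_args:
      project_id: data.resolved_project_id
```

Now the exchange becomes: "You want to delete Current Project — is that right?" and the wrong-target deletion is caught before it executes.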
Rule of thumb: Any action that creates, updates, or deletes data should have confirmation_policy: true. The only exceptions are read-only lookups and logging side effects.
system_instructions in Output Mapper
This one is tempting. Your action returns a big response payload and you only want to show a few fields to the user. So you ask the LLM to pick them out. The problem is that output mappers are supposed to return structured data, not delegate formatting to an LLM.
What it looks like
Say your HTTP action returns this payload:
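An illustrative payload. The field names and values are hypothetical, built to match the example later in this section (INC-4821, Jane Kim); dates are invented:

```json
{
  "ticket_id": "INC-4821",
  "status": "Open",
  "priority": "P2",
  "assignee": {
    "name": "Jane Kim",
    "department": "Infrastructure",
    "email": "jane.kim@example.com"
  },
  "sla_deadline": "2025-03-17T17:00:00Z",
  "description": "Long free-text incident description the user does not need to see...",
  "audit_trail": ["created", "triaged", "assigned"]
}
```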
You only want the user to see the ticket ID, status, assignee, and SLA deadline. Instead of mapping those fields directly, you use system_instructions to ask the LLM to extract them:
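A sketch of the anti-pattern (the mapper key `ticket_summary` is illustrative; `system_instructions` is the setting this page warns about):

```yaml
output_mapper:
  ticket_summary:
    system_instructions: >
      Look at the response payload and pull out the ticket ID, status,
      assignee, and SLA deadline. Summarize them for the user.
```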
Why it breaks
system_instructions triggers an LLM call to generate the value. This is:
- Non-deterministic — different output each time, different field ordering, different phrasing
- Slow — adds an LLM round-trip to every execution
- Expensive — burns tokens on formatting instead of reasoning
- Unreliable — the LLM might hallucinate fields, omit fields, or reformat dates incorrectly
Real consequence
Sometimes the user sees: “Ticket INC-4821, Status: Open, Assigned to Jane Kim, SLA: March 17”
Sometimes: “Here’s a summary: The ticket is open and assigned to Jane from Infrastructure.”
Sometimes: the LLM includes the full description you didn’t want, or drops the SLA field entirely.
The output is different every time. You can’t build reliable downstream logic on non-deterministic data.
The fix
Map the exact fields you need directly:
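An illustrative direct mapping; the `data.response.*` path is an assumption about where your HTTP action's payload lands, so adjust it to your action's actual output key:

```yaml
output_mapper:
  ticket_id: data.response.ticket_id
  status: data.response.status
  assignee: data.response.assignee.name
  sla_deadline: data.response.sla_deadline
```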
The reasoning engine already knows how to present structured data to users conversationally. That’s its job — don’t duplicate it in the output mapper with an LLM call.
When is system_instructions appropriate in an output mapper? Only for cosmetic adjustments that don’t affect the structural output — adjusting tone, adding formatting like bold or italics, or rephrasing a value for readability. If you’re selecting, filtering, or transforming fields from a response, use direct field mapping instead.
RENDER with Mustache Loops
Mustache templates in output mappers seem like a clean way to format lists. In practice, they break in ways that are hard to diagnose.
What it looks like
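A sketch of the anti-pattern. The exact `RENDER` invocation syntax varies, so treat this as a shape rather than copy-paste config; `team_response` and `members` are hypothetical names:

```yaml
output_mapper:
  team_summary:
    RENDER():
      template: |
        Your team:
        {{#members}}
        - {{name}} ({{department}})
        {{/members}}
      context: data.team_response
```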
Why it breaks
- Mustache rendering is non-deterministic in this context
- The rendered output is only visible if the next activity has confirmation_policy: true
- Complex logic in templates is hard to debug — no error messages, just missing data
Real consequence
You expect:
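(Illustrative output; the names are hypothetical.)

```
Your team:
- Jane Kim (Infrastructure)
- Ravi Patel (Networking)
```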
You get:
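(Same illustrative data, after the Mustache context binding fails silently.)

```
Your team:
- ()
- ()
```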
Names are missing because context binding failed silently. Or the entire rendered string is empty because the next activity doesn’t have confirmation enabled.
The fix
Return structured data using a data mapper:
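An illustrative mapping; `team_response` and `members` are the same hypothetical names as above:

```yaml
output_mapper:
  # Pass the structured list through; the reasoning engine formats it for display.
  team_members: data.team_response.members
  # If items need reshaping, use data mapper functions like MAP() or FILTER()
  # (see the Data Mapper reference) instead of a Mustache template.
```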
Let the reasoning engine format the list for display. It handles internationalization, markdown formatting, and conversational tone — things a Mustache template can’t adapt to.
Data mapper functions like MAP(), FILTER(), and SORT() handle iteration and transformation without the fragility of template rendering. See the Data Mapper in HTTP Actions reference and the decision frameworks guide for when to use DSL, data mappers, or Python.
Quick Reference
The single most important CP architecture principle — never chain actions without a slot barrier.
Six slot anti-patterns that break in production: bloated descriptions, wrong types, missing validation.
Six decision trees for action types, slot types, CP vs CA, LLM vs DSL, and data transforms.
When and how to use LLM actions — the right place for system_instructions.