---
title: The Golden Rule
position: 0
excerpt: >-
  Never execute two action-based activities in a row without collecting a slot
  in between.
deprecated: false
hidden: false
metadata:
  title: The Golden Rule | Agent Studio Best Practices
  description: >-
    The single most important architecture principle in Agent Studio: never
    chain action activities without a slot barrier. Learn why it exists, what
    breaks when you violate it, and how compound actions fix it.
  robots: index
---
> Never execute two action-based activities in a row in a conversation process without collecting a slot in between.
That's the rule. If you remember one thing from this page, remember that. Every other best practice in Agent Studio is downstream of this principle.
## Why This Rule Exists
Every action activity in a conversation process returns output that gets added to the reasoning engine's [context window](/agent-studio/agentic-ai/llm-fundamentals/context-windows). That output stays in the window for the rest of the conversation. The engine reads all of it on every subsequent turn to decide what to say and do next.
Chain two actions back-to-back, and both outputs sit in the window. Chain four, and you have four full API responses competing for the engine's limited [attention budget](/agent-studio/agentic-ai/llm-fundamentals/attention-and-limitations).
Here's what happens:
| Chained Actions | Estimated Context Growth | Latency Impact |
| --------------- | ------------------------ | -------------- |
| 1 | \~2-5 KB | Baseline |
| 2 | \~4-10 KB | Noticeable |
| 3 | \~6-15 KB | Significant |
| 4 | \~8-20 KB | Doubles |
The reasoning engine processes the entire context window on every turn. More tokens means more computation and more latency. But the performance hit isn't just about speed.
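The growth in the table is linear in the number of chained actions. A toy sketch of that arithmetic (the 2-5 KB per-action range mirrors the table; real payload sizes depend on your APIs):

```python
# Toy model of context growth from chained action outputs.
# The per-action sizes are illustrative, not measured.

def context_growth_kb(chained_actions: int,
                      min_kb: float = 2.0,
                      max_kb: float = 5.0) -> tuple[float, float]:
    """Return the (low, high) estimate of context added, in KB."""
    return (chained_actions * min_kb, chained_actions * max_kb)

for n in range(1, 5):
    low, high = context_growth_kb(n)
    print(f"{n} chained action(s): ~{low:.0f}-{high:.0f} KB")
```

Every chained action shifts the whole range up; there is no plateau.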
## The Lost in the Middle Problem
The reasoning engine doesn't read the context window with equal attention everywhere. Research shows that attention follows a [U-shaped curve](/agent-studio/agentic-ai/llm-fundamentals/attention-and-limitations#the-u-shaped-attention-curve): strong at the beginning, strong at the end, weak in the middle.
When you chain three actions:
* **Action 1's output** sits near the beginning of the action history. Decent attention.
* **Action 2's output** lands in the middle. Weak attention — roughly 20% recall.
* **Action 3's output** is near the end. Decent attention.
The engine is measurably more likely to hallucinate about the second action's data than the first or third. You'll see it pick the wrong field, misinterpret a value, or ignore the data entirely. And the bugs are hard to trace because they look like random reasoning failures, not positional attention problems.
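A toy illustration of the U-shape makes the three-action case concrete. This parabola is not the engine's actual attention function; it just captures the shape of the curve, with the "roughly 20%" middle recall from above baked in as an assumption:

```python
# Toy U-shaped attention profile: strong at both ends of the
# action history, weak in the middle. Illustrative only -- the
# real curve comes from the model, not this formula.

def attention_weight(position: int, total: int) -> float:
    """Relative attention for item `position` (0-based) of `total`."""
    if total == 1:
        return 1.0
    x = position / (total - 1)           # 0.0 (start) .. 1.0 (end)
    return 0.2 + 0.8 * (2 * x - 1) ** 2  # parabola: 1.0 at ends, 0.2 mid

weights = [attention_weight(i, 3) for i in range(3)]
# The middle action (Action 2) gets the lowest weight of the three.
```

With three chained actions, the middle output lands at the bottom of the curve, which is exactly where the misreads cluster.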
## What a Violation Looks Like
Here's a meeting scheduler plugin that chains four action activities. Each action depends on the previous one's output, and all four responses dump into the context window.
Conversation processes are built using Agent Studio's visual editor. The YAML below is a codified representation of what you'd configure in the UI.
```yaml maxLines={20}
# Conversation Process: schedule_meeting
activities:
- action_activity:
action_name: lookup_user_calendar
required_slots: [organizer]
input_mapper:
user_id: data.organizer.id
output_key: calendar_info
- action_activity: # Action right after action
action_name: fetch_room_list
input_mapper:
building: data.calendar_info.default_building
output_key: rooms
- action_activity: # And another
action_name: check_room_capacity
input_mapper:
rooms: data.rooms.available
headcount: data.attendees.$LENGTH()
output_key: suitable_rooms
- action_activity: # And another
action_name: reserve_room_and_create_event
input_mapper:
room: data.suitable_rooms[0]
attendees: data.attendees
```
Four actions chained. Each returns 2-5 KB. The reasoning engine's context balloons to 20 KB+ of intermediate data it doesn't need. Response time goes from 2s to 8s. The user stares at a spinner.
## The Fix: Compound Actions
[Compound actions](/agent-studio/actions/compound-actions) move the chain out of the conversation process. The intermediate results stay outside the context window entirely. The reasoning engine only sees the final, clean output.
### Move the chain into a compound action
Take the sequence of actions that were chained in the conversation process and group them into a single compound action. Each step executes in order, but the intermediate results never touch the reasoning engine's context.
```yaml
# Compound Action: prepare_meeting_room
steps:
- action:
action_name: lookup_user_calendar
input_mapper:
user_id: data.organizer.id
output_key: calendar_info
- action:
action_name: fetch_room_list
input_mapper:
building: data.calendar_info.default_building
output_key: rooms
- action:
action_name: check_room_capacity
input_mapper:
rooms: data.rooms.available
headcount: data.headcount
output_key: suitable_rooms
- return:
output_mapper:
recommended_room: data.suitable_rooms[0].name
room_capacity: data.suitable_rooms[0].capacity
alternatives_count: data.suitable_rooms.$LENGTH() - 1
```
The `return` step is where the magic happens. Instead of dumping three full API responses into the context, the compound action returns exactly three fields. The reasoning engine sees `recommended_room`, `room_capacity`, and `alternatives_count`. Nothing else.
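In effect, the `return` step is a projection: pick a few fields out of a large intermediate structure and discard the rest. A sketch of that idea in plain code, with made-up room data standing in for the real `check_room_capacity` response:

```python
# Sketch of what a return step's output_mapper accomplishes:
# project a handful of fields out of a large intermediate payload.
# The sample data is made up for illustration.

suitable_rooms = [
    {"name": "Room 4A", "capacity": 8, "floor": 4, "has_av": True},
    {"name": "Room 2B", "capacity": 6, "floor": 2, "has_av": False},
    {"name": "Room 7C", "capacity": 12, "floor": 7, "has_av": True},
]

# Equivalent of the output_mapper in the compound action above:
curated = {
    "recommended_room": suitable_rooms[0]["name"],
    "room_capacity": suitable_rooms[0]["capacity"],
    "alternatives_count": len(suitable_rooms) - 1,
}
# Only these three fields reach the reasoning engine's context;
# floors, AV flags, and the other rooms' details never do.
```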
### Use the compound action in a clean conversation process
Now the conversation process has one action activity that encapsulates the entire chain. The second action has its own `required_slots`, which forces the engine to stop and collect them before proceeding — that's the slot barrier.
```yaml
# Conversation Process - Clean
activities:
- action_activity:
action_name: prepare_meeting_room
required_slots: [organizer, attendees]
output_key: room_prep
confirmation_policy: true
- action_activity:
action_name: create_calendar_event
required_slots: [meeting_title, meeting_start_time, duration_minutes]
input_mapper:
title: data.meeting_title
room: data.room_prep.recommended_room
attendees: data.attendees
output_key: created_event
confirmation_policy: true
```
### Verify the context savings
Before: four actions dumping \~20 KB of intermediate JSON into the context window.
After: one compound action returning \~200 bytes of curated fields. That's a **99% reduction** in context usage from the action chain.
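The arithmetic behind that figure, using the ~20 KB and ~200 byte estimates above:

```python
# Back-of-the-envelope check on the context savings.
before_bytes = 20 * 1024   # ~20 KB of chained intermediate JSON
after_bytes = 200          # ~200 bytes of curated return fields

reduction = 1 - after_bytes / before_bytes
print(f"{reduction:.1%}")  # about a 99% reduction
```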
## The Role of confirmation\_policy
Notice `confirmation_policy: true` on both action activities above. This gives the user a checkpoint — they see the result and confirm before the action fires. It's useful when you want the user to verify data before the process continues (e.g., "I found Room 4A with capacity for 8. Want me to book it?").
The slot barrier itself comes from `required_slots`. When the next action activity declares required slots, the engine has to stop and collect them before proceeding. That's the natural breakpoint that satisfies the Golden Rule. `confirmation_policy` adds a user verification step on top of that barrier; it's useful, but separate from the Golden Rule's requirement.
## Rule of Thumb
If your conversation process has two action activities back-to-back, refactor. Move the chain into a [compound action](/agent-studio/actions/compound-actions), use `return` with an `output_mapper` to curate the fields the reasoning engine actually needs, and make sure the next action activity declares `required_slots` so there's a slot barrier between action groups.
The context window is a finite resource. Treat it like one.
## Learn More

* [Context windows](/agent-studio/agentic-ai/llm-fundamentals/context-windows): how the context window fills up and why every token has a cost.
* [Attention and limitations](/agent-studio/agentic-ai/llm-fundamentals/attention-and-limitations): the U-shaped attention curve and the Lost in the Middle phenomenon.
* [Compound actions](/agent-studio/actions/compound-actions): full reference for building compound actions with steps, return, and output mapping.
* The full reasoning cycle and how the engine manages context.