---
title: Processing Tool Responses
position: 2
excerpt: ''
deprecated: false
hidden: false
metadata:
  title: ''
  description: >-
    How the reasoning engine processes plugin responses — variable passing,
    structured vs plaintext paths, and what it means for your output mappers.
  robots: index
---

Every time a plugin action fires, its response passes through a processing step before the [reasoning engine](/agent-studio/agentic-ai/agentic-reasoning-engine/how-the-reasoning-engine-works) sees it. What the engine can do with that data depends entirely on the format of the response. This is [Step 5](/agent-studio/agentic-ai/agentic-reasoning-engine/how-the-reasoning-engine-works#how-the-engine-reasons) of the reasoning loop.

## What Happens to Tool Responses

The engine checks every plugin response and routes it down one of two paths based on size and structure.

If the action response is under the token threshold, it goes into the context window as-is. The engine sees the full payload and reasons over it directly. No transformation, no loss.

When a response exceeds the threshold and contains valid structured JSON, the engine stores the full dataset as a named variable. It generates a truncated preview plus a JSON schema describing the structure, along with an instruction that the full data is available through the Code Interpreter. This keeps the context window from being flooded with thousands of tokens of raw API data while still giving the engine enough to reason about what came back. For responses above the [7K token threshold](/agent-studio/core-concepts/structured-data-analysis#critical-limits), this is what triggers [Structured Data Analysis (SDA)](/agent-studio/core-concepts/structured-data-analysis).

Both the original action result and the engine's processed version get appended to the conversation history. The engine sees enough to decide what to do next without burning its entire context budget on one large payload.
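The two-path check above can be sketched in a few lines of Python. This is an illustrative model, not the platform's implementation: the threshold constant, the token estimator, the variable-naming scheme, and the return shapes are all assumptions made for the sketch.

```python
import json

# Illustrative constants — the real engine's limits are internal.
TOKEN_THRESHOLD = 7_000
PREVIEW_RECORDS = 3

def estimate_tokens(text: str) -> int:
    # Crude stand-in: roughly 4 characters per token.
    return len(text) // 4

def process_tool_response(raw: str, variables: dict) -> dict:
    """Route a plugin response down the inline or variable-passing path."""
    if estimate_tokens(raw) <= TOKEN_THRESHOLD:
        # Small enough: the full payload enters the context window as-is.
        return {"path": "inline", "context": raw}
    try:
        data = json.loads(raw)
    except ValueError:
        # Large plaintext: no schema, no variable — the whole string goes inline.
        return {"path": "inline", "context": raw}
    # Large structured JSON: store the full dataset as a named variable and
    # send only a preview plus a schema sketch into context.
    var_name = f"tool_result_{len(variables)}"
    variables[var_name] = data
    records = data if isinstance(data, list) else [data]
    preview = records[:PREVIEW_RECORDS]
    schema = sorted(preview[0]) if preview and isinstance(preview[0], dict) else []
    return {
        "path": "variable",
        "variable": var_name,
        "preview": preview,
        "schema_fields": schema,
        "note": f"Full dataset available to the Code Interpreter as {var_name}",
    }
```

Small responses and large unparseable strings both take the inline branch; only large, valid JSON takes the variable branch that feeds SDA.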
## Variable Passing

Variable passing is the mechanism the engine uses when structured data is too large for inline context. Here's the flow:

1. **Schema extraction.** The engine reads the JSON structure of the response and generates a JSON schema describing its shape, field names, and types. This schema becomes the engine's map of what data is available.
2. **Text truncation.** The full response gets cut down to a preview. The engine sees the first few records, enough to understand the data shape and content, but not the full dataset.
3. **Variable creation.** The engine stores the complete response as a named variable accessible to the Code Interpreter. The full dataset lives outside the context window but is available for code execution.
4. **Preview generation.** The truncated preview, the schema, and a reference to the named variable get packaged together. This is what actually enters the context window.
5. **Code Interpreter instruction.** The engine appends an instruction noting that the full dataset is available through the Code Interpreter plugin. When it needs to compute aggregations, filters, or analysis across the full data, it writes Python code that runs against the complete variable.

This is the bridge between your plugin's raw output and [SDA](/agent-studio/core-concepts/structured-data-analysis). When SDA activates on a large dataset, it's working with the variable the engine created, not with whatever truncated preview it saw in context.

Variable passing only works with structured JSON. If your action returns a large plaintext string, the engine can't extract a schema, can't create a variable, and can't route to the Code Interpreter. The entire string goes into context as-is. This is why structured output mappers matter at every layer.

## Structured vs Plaintext: Two Different Paths

The format of your output mapper's response determines which processing path the engine takes. This has real consequences for what it can do with your data.
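The clearest consequence is step 5 of the variable-passing flow: the engine answers aggregate questions by writing code against the full stored variable, not the truncated preview. A minimal sketch of what that generated code might look like, assuming a hypothetical variable name (`expenses`) and record shape:

```python
from collections import defaultdict

# Stand-in for the named variable holding the complete dataset.
# The variable name and field names here are illustrative assumptions.
expenses = [
    {"description": "Flight to Berlin", "amount": 420.00, "category": "Travel"},
    {"description": "Team lunch", "amount": 85.50, "category": "Meals"},
    {"description": "Hotel", "amount": 610.00, "category": "Travel"},
    {"description": "Client dinner", "amount": 140.25, "category": "Meals"},
]

# Aggregate over the complete dataset, not the preview in context.
totals: dict[str, float] = defaultdict(float)
for expense in expenses:
    totals[expense["category"]] += expense["amount"]

highest_category = max(totals, key=totals.get)
print(highest_category, totals[highest_category])  # Travel 1030.0
```

The preview in context might show only the first few records; code like this runs over everything the variable holds. None of it is possible when the response arrived as an opaque string.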
**When your output mapper returns structured JSON**, the engine can parse it, extract a schema, sample a few objects, and send a compact representation to the reasoner. The full dataset gets stored as a variable. The engine can reason about the structure, and the Code Interpreter can run analysis against the complete data.

**When your output mapper returns plaintext** (like a RENDER string), the engine can't parse it. It's an opaque blob. The entire string gets forwarded to the reasoner as-is, which bloats the context if there's a lot of data. No schema extraction. No variable passing. No Code Interpreter access.

Here's what this looks like in practice. Consider a plugin that returns expense data.

**Plaintext path (RENDER):**

```yaml
output_mapper:
  expense_summary: |
    RENDER
    Here are your submitted expenses:
    {{#each data.expenses}}
    - {{this.description}}: ${{this.amount}} ({{this.category}})
    {{/each}}
    Total: ${{data.total_amount}}
```

The engine receives a rendered string. It can't parse the individual expense records, can't create a variable, and can't route to the Code Interpreter. The full rendered text goes into context. If there are 500 expenses, that's 500 lines of text competing for the engine's attention budget.

**Structured path (MAP):**

```yaml
output_mapper:
  expenses:
    MAP():
      items: data.expenses
      converter:
        description: item.description
        amount: item.amount
        category: item.category
  total_amount: data.total_amount
  expense_count: data.expenses.$LENGTH()
```

The engine receives clean JSON. It extracts the schema, creates a named variable with the full dataset, and sends a truncated preview to the reasoner. If the user asks "what's my highest expense category?", the engine writes Python against the full variable instead of trying to scan 500 lines of rendered text.

The YAML examples above show the same data through two different processing paths. The RENDER version produces a string the engine can't work with.
The MAP version produces structured JSON the engine can parse, store, and route to the Code Interpreter. This is the architectural reason structured output mappers win: the platform can work with JSON intelligently, but plaintext bypasses that intelligence entirely.

## display_instructions_for_model

There's one special key the engine handles differently: `display_instructions_for_model`. When this key is present in your output mapper's response, the engine extracts it and routes it as a system-level directive to the reasoner. It doesn't get treated as data. It gets treated as instructions for how to interpret the data.

This is how you steer [SDA's analysis behavior](/agent-studio/core-concepts/structured-data-analysis#steering-sda-with-instructions) without polluting the dataset itself. The engine separates the instructions from the data before either one reaches the reasoner.

## Practical Implications

Response processing is invisible infrastructure. You don't configure it, you don't see its logs, and you can't change its thresholds. But it shapes everything downstream.

You can't configure how the engine filters or truncates tool responses. What you CAN control is giving it structured data it can work with. Return structured JSON from your output mappers. Use `MAP()` to flatten and rename fields. Include `display_instructions_for_model` for baseline analysis rules. The better your output shape, the more the engine can do for you.

A few things to keep in mind:

* **If the engine isn't referencing specific fields from a large response**, it likely truncated the preview. The engine is working from a summary, not the full payload. Design your outputs to put the most important data first, or use compound actions to pre-filter before the response enters processing.
* **If SDA isn't activating**, your output might be under the 7K token threshold, or it might not be valid structured JSON. Check both.
  Data embedded inside a string field (`"data": "{\"accounts\": [...]}"`) won't trigger the structured path.

## What to Read Next

* How SDA activates on large datasets, the 7K token threshold, and designing your output for code execution.
* The full reasoning loop, context window assembly, and why attention matters for your plugins.
* How individual prediction cycles chain into planning, execution, and user feedback loops.
* MAP(), RENDER, FILTER(), and the full data mapper syntax for shaping your action outputs.