---
title: Connect to other AI Agents
position: 3
excerpt: ''
deprecated: false
hidden: false
metadata:
  title: ''
  description: ''
  robots: index
next:
  description: ''
---

# Objective

This cookbook is helpful if your goal is to make it possible to chat with a third-party (3P) agent or LLM **through** your Moveworks Assistant.

# Use Cases

You can do this for:

* **Other AI agents**: Send a prompt to those agents and wait for them to do work (e.g. Workday's agent, M365 Copilot). Note: this requires those vendors to expose a "Responses API", "Chat Completions API", or similar.
* **RAG (Retrieval-Augmented Generation) systems**: In these systems, the LLM is provided additional retrieved context in the prompt.
* **In-house LLMs**: Fine-tuned models trained on your data that generate content based on that training data.
* **Foundation models**: General, off-the-shelf models like ChatGPT, Claude, or other families of LLMs. If you're trying to connect to a foundation model like GPT, try our built-in plugin: [QuickGPT](https://www.moveworks.com/us/en/platform/quick-gpt).

# Architecture & Implementation

If you want to connect Moveworks to other AI agents & applications, you can do so through our [plugins](/agent-studio/core-concepts/assistants-agents-plugins#/).
Your plugin will look something like this:

```mermaid
sequenceDiagram
    participant User
    participant AI_Assistant as AI Assistant
    participant Claude_Plugin as Claude Plugin
    participant Anthropic_API as Anthropic API
    participant Search_Plugin as Search Plugin

    %% First Turn
    User->>AI_Assistant: "Get competitive intel on enterprise chat platforms"
    AI_Assistant->>AI_Assistant: Select Claude_Plugin
    AI_Assistant->>Claude_Plugin: Send prompt
    Claude_Plugin->>Anthropic_API: Call Anthropic API
    Anthropic_API-->>Claude_Plugin: Return competitive summary
    Claude_Plugin-->>AI_Assistant: Return Claude's output
    AI_Assistant-->>User: Deliver competitive intel summary

    %% Follow-up Turn
    User->>AI_Assistant: "Now contrast that with our native chat capabilities"
    AI_Assistant->>Search_Plugin: Retrieve product docs, internal feature specs, etc.
    Search_Plugin-->>AI_Assistant: Return relevant internal context
    AI_Assistant->>AI_Assistant: Combine Claude's competitive intel with org context
    AI_Assistant-->>User: Share analysis
```

For the easiest implementation, we recommend the following high-level approach.

1. **Choose an invocation phrase for your LLM.** Here we are using "Hey Claude".

   ![](https://files.readme.io/d9ece8680a569e9b714e36cc4165c841676988218f67c4967de8efb1d5316c18-CleanShot_2025-10-05_at_07.52.582x.png)

2. **Create two slots.**
   | Slot Name | Data Type | Slot Description |
   | --- | --- | --- |
   | query | string | The query a user has input to you |
   | conversation\_context | object | See the full description below |

   ```
   **Description**
   Capture the immediate conversational context by recording the last user message and the last bot response. This object should NEVER be requested from the user; it should be populated automatically based on the conversation history to maintain relevance and continuity for subsequent turns.

   **Properties**

   last_user_message (string)
   Description: NEVER ASK THE USER FOR THIS INFORMATION. This is the literal text of the last RELEVANT message the user sent. It represents the user's direct input that prompted your most recent response. Make it exact; do not summarize. Capturing the user's query or statement verbatim is crucial for understanding the immediate context of the conversation turn.

   last_bot_message (string)
   Description: NEVER ASK THE USER FOR THIS INFORMATION. This is the literal text of the last RELEVANT message you sent. Focus more on the content you replied with and less on whether other plugins found any knowledge. This is good for maintaining contextual relevance. Make it exact; do not summarize it at all; length is no issue here. Ignore things like "Here are the resolved arguments" or progress updates. Only grab the final content sent to the user.
   ```
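Filled in for the example conversation in the diagram above, the two slots might carry values like the following. This is a hypothetical sketch in Python; the field values are invented for illustration, but the field names mirror the slot definitions.

```python
# The "query" slot: the user's current message, verbatim.
query = "Now contrast that with our native chat capabilities"

# The "conversation_context" slot: populated automatically from conversation
# history, never requested from the user.
conversation_context = {
    # Verbatim copy of the last relevant user message (not summarized)
    "last_user_message": "Get competitive intel on enterprise chat platforms",
    # Verbatim copy of the last relevant assistant reply (not summarized)
    "last_bot_message": "Here is a summary of the enterprise chat landscape: ...",
}
```

Because both fields are captured verbatim, a follow-up turn can be routed back to the same plugin with enough context for the downstream LLM to understand "that" in the user's message.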
3. Set up an [HTTP action](/agent-studio/actions/http-actions).

   ```
   curl https://api.anthropic.com/v1/messages \
     -X POST \
     -H 'Content-Type: application/json' \
     -H "x-api-key: $ANTHROPIC_API_KEY" \
     -H 'anthropic-version: 2023-06-01' \
     -d '{
       "model": "claude-3-5-sonnet-20241022",
       "max_tokens": 1024,
       "messages": [
         {
           "role": "user",
           "content": "{{user_query}}"
         }
       ]
     }'
   ```

4. Configure your [Conversation Process](/agent-studio/conversation-process#/) to use that HTTP action & pass the slots into the API call.

   ![](https://files.readme.io/168841f3f179eff4f2384eae02a97d5f05961bac53bc3d07c8ab1c56e0f98679-CleanShot_2025-10-05_at_08.05.282x.png)

   ```
   user_query: |
     $CONCAT([
       "'UserInput:'", data.query,
       "'PreviousBotMessage:'", $TEXT(data.conversation_context)
     ])
   ```

5. Add a content activity to help the AI Assistant select your plugin on subsequent turns.

   ![](https://files.readme.io/b466d94f156552b061ee309690d542fc287431df63e0779c80fd58acdb971e4c-CleanShot_2025-10-05_at_08.07.252x.png)
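Steps 3 and 4 can be sketched together in Python to show what actually gets sent. This is a minimal illustration, not a Moveworks API: the `$CONCAT` expression is approximated with string concatenation, and `$TEXT` (which serializes the context object to text) is approximated here with `json.dumps`, which is an assumption. No network call is made.

```python
import json
import os

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"


def build_user_query(query: str, conversation_context: dict) -> str:
    """Approximate the $CONCAT expression: join the raw user query and the
    serialized conversation context into a single prompt string."""
    return "".join([
        "'UserInput:'", query,
        "'PreviousBotMessage:'", json.dumps(conversation_context),
    ])


def build_anthropic_request(user_query: str, api_key: str):
    """Build headers and a JSON body matching the curl call above."""
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
    }
    body = {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        # {{user_query}} in the HTTP action is the composed string below
        "messages": [{"role": "user", "content": user_query}],
    }
    return headers, body


# Example composition (invented values; nothing is sent over the network):
user_query = build_user_query(
    "Get competitive intel on enterprise chat platforms",
    {"last_user_message": "", "last_bot_message": ""},
)
headers, body = build_anthropic_request(
    user_query, os.environ.get("ANTHROPIC_API_KEY", "")
)
```

Packing both the current query and the previous turn into one prompt string is what lets a stateless Messages API behave conversationally across turns of your plugin.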
# Check out our demo!