AI Assistant Overview
Overview
We’re excited to introduce you to the most powerful workplace assistant you’ve ever had - the Moveworks AI Assistant. The Assistant provides a ground-breaking conversational experience using the latest advances in Generative AI and Large Language Models (LLMs), designed to enhance your productivity and make getting help simpler!
The Moveworks AI Assistant brings the same amazing conversational experience you’ve witnessed with ChatGPT and Google Gemini to your workplace. Assistant lives in your surface of choice, like Slack or Teams, giving users one central place to go for quick and easy help. Empower your teams to instantly search and take action across your company’s knowledge bases, files, business processes, and system integrations to find the information they need.
Ready to learn more? Watch our introductory video here.
How Assistant works
The three operating principles below guide Assistant's conversational nature.
Engaging experiences: We set out to make every interaction more engaging and intuitive than ever before. This high engagement leads to more questions asked, more tasks completed, and more problems solved.
Faster outcomes: Employees turn to our Assistant when they need to quickly get unstuck and back to work. By using AI to streamline dialogues and reduce steps, we can dramatically accelerate task completion, meaning every interaction should be as frictionless as possible.
Limitless possibilities: Our goal is to remove any artificial constraints on what our Assistant can do so that we can support nearly any use cases across systems and teams. And with generative AI at the core, it will continuously expand its knowledge and skills.
In order to achieve these, it’s important to understand the Assistant's core competencies:
Reasoning stems from Assistant's foundational intelligence, enabling it to tackle complex issues by mimicking human cognition. This allows Assistant to understand the context of inquiries, engage in clarifying dialogues, and strategize solutions across various systems.
Planning is the process of mapping out specific steps to resolve requests, based on the reasoning phase. It ensures a structured and logical progression towards the solution, eliminating guesswork and enhancing efficiency.
Plugins are specialized tools for executing the action plan. Custom-built for specific tasks and system integrations, they automate processes seamlessly without the need for reconfiguration, acting as the operational arms of the Assistant.
Clarifications are crucial for enhancing vague or incomplete requests. Through natural language conversations or system data analysis, the Assistant fills in information gaps, ensuring a comprehensive understanding of the problem.
Summaries are the Assistant's final responses, utilizing generative AI to provide personalized, succinct, and understandable explanations of the actions taken and information gathered.
Citations complement summaries by providing sources for the information used, establishing trust and transparency, and allowing for further exploration if desired.
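To make these competencies concrete, here is a minimal, purely illustrative sketch (in Python) of how a single request could flow through them. The function and plugin names are hypothetical and do not reflect Moveworks’ actual implementation.

```python
# Illustrative only: a toy request cycle showing how reasoning, clarification,
# planning, plugins, summaries, and citations could fit together.

def handle_request(user_request: str, context: list[str]) -> str:
    # Reasoning: interpret the request in light of prior conversation context.
    intent = f"{user_request} (given recent context: {context[-3:]})"

    # Clarification: ask a follow-up question if the request is too vague.
    if len(user_request.split()) < 3:
        return "Could you tell me a bit more about what you need?"

    # Planning: map out the specific steps (plugins) likely to resolve it.
    plan = ["search_knowledge", "lookup_person"]  # hypothetical plugin names

    # Plugins: execute each planned step and collect results with their sources.
    results = [(step, f"result of {step} for '{intent}'") for step in plan]

    # Summary + citations: condense what was gathered and cite the sources used.
    summary = "; ".join(text for _, text in results)
    citations = ", ".join(step for step, _ in results)
    return f"{summary} [sources: {citations}]"


print(handle_request("reset my VPN password", context=["User asked about Wi-Fi earlier"]))
```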
Going from Classic to Assistant
Assistant offers an entirely new and upgraded conversational experience from that of Classic. You’ll now unlock new abilities to handle complex queries with advanced reasoning and succinct summaries from multiple sources. Before you get started, let’s first touch briefly on what is still the same within your Moveworks experience:
Every use case from Classic remains supported
Some use cases and features may look and feel different, and some others are planned to be added in the coming months, but you will still be able to do everything you could do with Classic.
All of your systems integrations and access controls remain supported
Every system you have integrated remains connected to the Assistant. There is no change to the available content and related permissions.
All of the knowledge, catalog items and resources are still available and accessible.
All of your connected content is still ready for recall and processing.
Moveworks manages and handles your data with the same high standards
There is no change in our commitment to you regarding the security and privacy of your organization’s data.
Curious what that means for you and your organization?
Here are just a few new things that the Assistant can do that Classic couldn’t:
- Search and summarize answers sourced directly from your connected information sources and platforms (including files!)
- Execute tasks within your integrated systems and plugins
- Automatically choose the plugins that best fulfill employee-prompted tasks or queries
- Execute multiple plugin calls and actions en route to employee query resolution
- Resolve multi-step and multi-action requests within the same message
- Ask users for more clarity when encountering task execution errors
- Ask follow-up questions when a user’s question or query is not precise enough to return a desirable answer
- Interpret queries, reason logically, and generate succinct answers
And just how is the Assistant able to do all of this? We’ve talked about its new conversational experience a few times here, so now let’s break down the differences between the Assistant and the previous version to understand how it’s possible:
- Assistant uses context
In the previous experience, every question or request was a “binary,” one-off request. If you didn’t get the answer you were looking for, you had to start over. The Assistant is now able to use the conversational context intelligently to figure out what you mean.
- Assistant answers the question directly
The Assistant now provides a direct, summarized answer to any question you ask. Humans don’t search to find information - we search to answer questions. The Assistant experience is geared towards getting answers more easily.
- Assistant uses only curated and approved documents (and cites the sources)
The value of the Assistant is only realized when you are able to trust the responses it gives to your workforce. That’s why all responses include the linked source citations used to create the dynamically generated responses.
If desired, you can also allow selected, approved external knowledge bases to be used for your organization. This uses a Moveworks feature called External Answers, which allows you to select specific knowledge bases for products and systems that you use so your Assistant can answer questions from them (ex. Microsoft Support site articles).
- Assistant provides visibility into its thinking
The Assistant shares its real-time thinking to show you how it has processed your request, and then provides a way to get help from a person. During processing, the reasoning steps appear dynamically in chat and disappear after the response is generated to keep the chat window uncluttered. However, you can click the ℹ️ icon at the bottom of every response to open a reference pane that documents this information for your review.
- Assistant takes feedback and adapts to what you need
The Assistant makes decisions dynamically about how to respond to requests. It makes its best judgment on what to offer, but sometimes that may not be what you need. In these cases, feedback is needed to arrive at the right answer. By using a combination of context and feedback, the Assistant understands what about its previous answer was not helpful and changes its next response.
- Assistant asks clarifying questions to handle vague queries
The Assistant takes its best shot at answering ambiguous queries, but also follows up to indicate that it can give a better answer if the user can be more specific.
- Assistant can handle multiple requests in a query
The Assistant is able to handle multiple requests. It will parse, process, and address the pieces of the user’s request one at a time (ex. finding a form and performing a couple of lookups in connected systems to respond with relevant information) as part of a comprehensive answer.
- Assistant breaks down complex requests to come up with a plan to solve them
The Assistant understands the tools at its disposal to serve the user’s request, and combines this with a degree of reasoning to create a plan. It searches all available and relevant information, reasons and generates the most helpful answer, and shares answers for resolution or refinement.
What to Remember About Assistant
Remember, the Assistant is a fundamentally different way of seeking knowledge and resolutions. It’s not a “binary,” yes/no service paradigm, but one designed to mimic human decision making by leveraging the entirety of your available information ecosystem. Keep the following in mind:
The Assistant's decision-making is more dynamic
Assistant offers a dynamic conversational experience and is therefore not programmed to follow a sequence of steps. Instead, it looks at the user’s request, the prior context, and the available plugins and resources to determine what to do. For many requests, there isn’t necessarily a single way to solve the problem - which means that you can expect greater variation in the ways that the Assistant addresses requests.
The Assistant's responses are more dynamic, too
For every request, the Assistant adapts its response to the context, which means that it always takes the available information and summarizes it into a response, dynamically deciding which sources to use and how to best address the user. As a result, you will see greater variation in the response verbiage as well.
The Assistant takes a bit longer to create a response
The generative nature of the Assistant means it performs significantly more computationally intensive tasks in order to: gather context, plan and execute an approach, evaluate the response quality, and tailor a summarized response. As such, responses may take longer to come back, but the answers are optimized for comprehension and usefulness. We anticipate latency will decrease as the underlying technologies continue improving.
The Assistant calls plugins sequentially and provides the first response that it deems helpful
The Assistant is aware of all the available plugins (think of them as specific tools or operations that are available for it to choose from) and it does a first pass to shortlist the ones it believes are most likely to be helpful. Then, it creates its plan and runs through that list sequentially, stopping when it receives a response that it estimates to be helpful.
If multiple plugins have useful responses, the Assistant does not return responses with information from all of them
This is a current limitation, a consequence of the above, but we are looking to eliminate it in the near future.
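The behavior described above, and the limitation that follows from it, can be pictured with a small, hypothetical sketch. The plugin names, the shortlist, and the “is this helpful?” check below are stand-ins, not the Assistant’s actual logic.

```python
# Illustrative only: shortlist plugins, call them one by one, and stop at the
# first helpful result. Because of the early stop, results from later plugins
# in the shortlist are never combined into the response.

from typing import Callable, Optional

PLUGINS: dict[str, Callable[[str], Optional[str]]] = {
    "knowledge_search": lambda q: "Found a 'VPN setup guide' article" if "vpn" in q.lower() else None,
    "people_lookup": lambda q: None,  # nothing relevant for this kind of query
    "file_ticket": lambda q: "A ticket has been filed for your request",
}

def respond(query: str) -> str:
    # First pass: shortlist the plugins most likely to help (order matters).
    shortlist = ["knowledge_search", "people_lookup", "file_ticket"]

    # Second pass: call each shortlisted plugin sequentially, stopping at the
    # first response judged helpful (here: any non-empty result).
    for name in shortlist:
        result = PLUGINS[name](query)
        if result:
            return f"{name}: {result}"
    return "No plugin produced a helpful response."

print(respond("How do I set up VPN on my laptop?"))
```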
The Assistant leverages the reference pane for all handoff options
Our vision is to keep the chat window as the dedicated place for conversations with the Assistant, and use the reference pane or popup as the hub for supporting non-conversational tasks such as: filing tickets, connecting to live agents, reviewing sources, providing feedback, and more.
Assistant Capabilities
Assistant opens up tremendous opportunities to redefine how employees experience service. The overview below shares what the Assistant can do for your organization. And remember, we’re not done! We’re still developing new capabilities to enhance the Assistant offering.
Search and summarization: The Assistant searches across knowledge bases, rosters and org charts, forms in your ITSM catalog, and any connected business systems. Then, it summarizes the search results to directly answer the user’s question.
- Instantly search all integrated knowledge & information sources
- Find and look up personnel information from your organization
Citation and reference: Users can click the ℹ️ icon in any Assistant response to immediately verify the information and see the source.
- Generate comprehensive answers that pull from multiple sources
- Verify a response’s source documentation
Take actions on behalf of users that comply with your org’s business processes: The Assistant can file a ticket, automatically provision software, add users to a distribution list, reset passwords, and unlock accounts for users.
- Manage the end-to-end ticketing user experience from chat
- Autonomously grant access to tools, resources, and processes
Fully conversational experience: You can talk to the Assistant like you are talking to an agent. It will remember your conversation context, ask follow-up questions for ambiguous requests, and give you a detailed explanation if something goes wrong.
- Host direct conversations in English on Slack, Microsoft Teams and Google Chat
- Resolve questions, surface information, and maximize knowledge for learning
Grounded content generation: Want to turn your search result into a nicely written summary? Simply ask the Assistant to convert it to an email or message, and you can further modify aspects like length, tone and emphasis to tweak the created content to your preferences. All of this with content that is grounded in your knowledge and resources.
- Craft custom content within chat conversations
- Discover existing information to generate specific messaging
Limitations
Assistant introduces a brand new service paradigm into your organization, and it’s important to remember that generative AI isn’t necessarily a magic bullet. To set the right expectations, we’ve outlined some important limitations to be aware of.
Assistant responses can vary, even when asked the same question
Because the Assistant uses generative AI, the Assistant generates each response in the moment. It takes into account many factors, including each user’s conversation history. It is not pulling pre-written responses. As a result, the Assistant can deliver different responses, even when asked the same question. It’s best to set users up with this expectation. If the Assistant’s response does not give a user exactly what they need, we encourage users to try asking again in a different way.
Assistant responses do not include images
The Assistant currently does not provide responses that include images.
Responses that require a large amount of text may be unsuccessful
Generative large language models have limits on how much text they can process in one pass. As a result, responses that require a large amount of text may be unsuccessful. This can typically be resolved by retrying after a few minutes or by trying a simpler request.
People lookups that include text-heavy responses might not be supported
People lookups that result in more text than the Assistant can currently process are not supported.
The Assistant does not search the internet
Assistant is designed to use your organization’s knowledge and resources to answer user requests, not unverified resources from the open internet.
Asking the Assistant more than 3 questions or requests at a time may not work reliably
For the best results, we recommend asking the Assistant no more than 3 questions or requests at a time. More than that could result in decreased efficacy.
We cannot guarantee that the Assistant will be able to successfully resolve all user requests.
There will be times when the Assistant cannot resolve the request or returns an incorrect response. When this happens, we encourage users to try asking in a different way. That might result in a more useful response.
Feedback provided through the app does not automatically modify the Assistant’s behavior
Users may 👍 or 👎 responses and provide free-text comments. Feedback is reviewed by the Moveworks product team to learn more about the user experience, and to inform prioritization of roadmap items. However, we cannot guarantee the feedback will result in a change to the Assistant.
If two plugins have competing responses for a user request, the Assistant serves only one in the first pass
Assistant looks at all the available plugins and, based on the user’s request and the prior context, shortlists and sequences the high-confidence plugin candidates that best serve the request. Plugin calls happen one by one, and the Assistant awaits each output before determining whether the response answers the user’s request; if not, it moves on to the next plugin.
If a plugin returns a partial response and needs more information, the Assistant summarizes that request for more information back to the user.
Users can invoke specific plugins by asking for them, but plugin selection cannot be enforced programmatically ahead of time
As with the lack of support for enforcing specific response verbiage, it is currently not possible to hard code plugin selection as a rule, e.g. “always respond to requests about X with plugin A”. The Assistant’s reasoning for plugin selection can be influenced through prompts, but in the interest of enabling it to dynamically identify an approach, we currently do not enforce rules.
People lookups and Creator Studio-built queries work but the Assistant does not produce responses in the same structured format currently
At the moment, the Assistant produces summarized text responses for all lookup use cases, as opposed to the structured card format in the previous version. We aim to provide configurability for such responses in the near future so that customers can choose how to display the response.
The Assistant leans towards serving user requests with the available plugins, and is sometimes not aware of which requests cannot be served
Each plugin has a description, which communicates to the Assistant the capabilities and use cases it can serve. The Assistant uses the plugin descriptions in the process of selecting which plugins to call for various requests. While it is possible to be more descriptive about which use cases a plugin supports, it is much more challenging to cover all the possible scenarios that a plugin cannot resolve. As a result, if a use case looks similar to the supported ones, the Assistant may invoke a plugin. This may sometimes lead to an unsuccessful attempt to solve the request, as the sketch below illustrates.
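As a rough illustration of why this happens, consider a toy similarity-based matcher. The descriptions, the word-overlap scoring, and the example request are hypothetical simplifications; the Assistant’s actual selection relies on LLM reasoning over plugin descriptions.

```python
# Illustrative only: a request that merely *looks* like a supported use case
# can still score highest against a plugin description and trigger that plugin.

def overlap(a: str, b: str) -> float:
    # Naive similarity: fraction of shared words between two strings.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

PLUGIN_DESCRIPTIONS = {
    "software_access": "request and grant access to software applications",
    "people_lookup": "look up people, teams, and org chart information",
}

request = "grant me admin access to the finance application"  # not actually a supported scenario
best = max(PLUGIN_DESCRIPTIONS, key=lambda name: overlap(request, PLUGIN_DESCRIPTIONS[name]))
print(best)  # "software_access" is still selected; the attempt may then fail downstream
```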
Key Assistant Concepts
To better understand how and why Assistant operates the way it does, we’ve shared some foundational concepts below.
Grounding
Grounding is the process of ensuring that a generative LLM’s response is based on specific content and information. The Assistant employs grounding in every response, and is geared towards generating responses which are based on your organization’s content, information and processes. This is primarily enforced with a technique called retrieval augmented generation (RAG), which involves two steps:
- Retrieval or identification of relevant information: The Moveworks platform is connected to a variety of information sources in your organization, including
- IT and HR knowledge bases
- File systems such as SharePoint Online and Google Drive
- Catalog items from systems like ServiceNow
- User roster information from Okta or Active Directory
- Distribution list management systems such as Google Groups
- Custom integrations with systems of the customer’s choice using Creator Studio
These integrations are leveraged by the suite of plugins to identify and retrieve the most relevant information to send to the Assistant.
- Response generation with summarization: The Assistant reviews the user’s request and context, and the information returned by the plugin. Based on this, it generates a summarized response including the most relevant details for the user.
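The two steps above can be pictured with a deliberately tiny retrieval-augmented generation sketch. The corpus, the word-overlap ranking, and the quoted “summary” are stand-ins for the real connectors and LLM summarization.

```python
# Illustrative only: retrieve the most relevant content, then generate a
# grounded response that cites its source.

CORPUS = {
    "vpn-kb-article": "To reset your VPN password, open the identity portal and follow the reset steps.",
    "pto-policy": "Employees accrue paid time off at a fixed monthly rate.",
}

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    # Step 1: retrieval - rank connected content by naive word overlap with the question.
    q = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(q & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str) -> str:
    source, text = retrieve(question)[0]
    # Step 2: generation - in production an LLM summarizes the retrieved text;
    # here we simply return it with the source attached as a citation.
    return f"{text} [source: {source}]"

print(answer("How do I reset my VPN password?"))
```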
Context and Prompts
The Assistant leverages large language models like GPT-4 and GPT-3.5 to create dynamic and customized responses for every user query, providing a rich and delightful conversational experience. The inputs to these response-generating models are collectively called “context,” and they include:
- Data: The user request, any prior requests and responses exchanged between the user and the Assistant, the source content used to generate these responses, and all relevant user attributes
- Instructions: Based on the request and the available plugins or use cases in the customer environment, the LLM receives instructions for how to approach the request (these influence the response but do not hard code the response)
The compiled information is sent to the LLM as text, and is called a “prompt” since it prompts the LLM to respond using the provided information and in certain ways that align with business rules and expected behavior of the use case. All generative large language models have limits on how much text they can process in one pass.
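As a hypothetical illustration of how such a prompt could be compiled (the field names, the character budget, and the trimming strategy are assumptions, not Moveworks’ actual format):

```python
# Illustrative only: combine instructions and data into one text prompt,
# respecting a rough input-size limit.

def build_prompt(request: str, history: list[str], sources: list[str],
                 instructions: str, max_chars: int = 2000) -> str:
    # Data: the user request, prior exchanges, source content, and (in the real
    # system) relevant user attributes.
    data = "\n".join(
        ["Conversation so far:"] + history
        + ["Relevant sources:"] + sources
        + [f"User request: {request}"]
    )
    # Respect the model's input limit by dropping the oldest text first.
    data = data[-max_chars:]
    # Instructions: guidance on how to approach the request; these influence
    # the response but do not hard-code it.
    return f"{instructions}\n\n{data}"

print(build_prompt(
    request="Summarize the PTO policy for me",
    history=["User: hi", "Assistant: Hello! How can I help?"],
    sources=["PTO policy: Employees accrue paid time off at a fixed monthly rate."],
    instructions="Answer using only the provided sources and cite them.",
))
```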
Assistant Reasoning and Planning
The Moveworks Assistant leverages the ability of large language models to take the user’s request and conversation context and create a plan to tackle the request. An LLM does not have a “brain,” but it is able to generate a plan and then answer the question: “Is this plan likely to address the user’s request?”
It can also follow its own plan by calling specific plugins to perform tasks and then review the outputs of these tasks to check if the user’s request is fulfilled or not.
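A conceptual sketch of this plan, check, execute, and re-check loop is below. The llm_* helpers stand in for real LLM calls, and the plugin names are hypothetical.

```python
# Illustrative only: draft a plan, self-check it, follow it via plugins, then
# check whether the outputs fulfill the user's request.

def llm_generate_plan(request: str) -> list[str]:
    # Stand-in for an LLM call that drafts a sequence of plugin steps.
    return ["knowledge_search", "summarize"]

def llm_yes_no(question: str) -> bool:
    # Stand-in for an LLM call that answers a yes/no self-check question.
    return True

def run_plugin(step: str, request: str) -> str:
    return f"output of {step} for '{request}'"

def resolve(request: str) -> str:
    plan = llm_generate_plan(request)
    # Self-check: "Is this plan likely to address the user's request?"
    if not llm_yes_no(f"Does the plan {plan} address: {request}?"):
        plan = llm_generate_plan(request)  # revise the plan and try again
    outputs = [run_plugin(step, request) for step in plan]
    # Outcome check: do the plugin outputs actually fulfill the request?
    if llm_yes_no(f"Do these outputs fulfill the request? {outputs}"):
        return outputs[-1]
    return "I couldn't fully resolve this. Would you like to file a ticket?"

print(resolve("What is the guest Wi-Fi policy?"))
```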
Plugins
Plugins are best understood as the use cases or operations available to the Assistant in the customer’s environment. These plugins form the building blocks of the Assistant’s response, and are specialized for doing specific tasks. As an example, there are several plugins which together serve ticketing-related use cases in the Assistant: the Display ticket plugin, Resolve ticket plugin, Reopen ticket plugin, Add comment to ticket plugin, and Handoff options plugin (which covers ticketing as well as other handoff options such as live agent).
Custom plugins
Custom plugins are created by customers for their organization’s specific needs using Creator Studio, and include queries/lookups, paths and events.
Native plugins
Native plugins are developed and maintained by Moveworks, and are available to all customers.
Action plugins
Native plugins are divided into two categories, based on whether the Assistant writes to the associated system. Action plugins comprise use case groups that involve writing to a system of record:
- All ticketing plugins - connect to the ticketing systems configured
- Distribution list plugins - connect to the system of record used to manage lists/groups and their members
- Software access plugin - connect to the software provisioning system of record to grant access to various applications
- Account access plugins - connect to the system used to manage access and authentication. Examples include reset password, reset MFA and unlock account plugins.
- Approvals plugins - connect to the systems of record for approval records, for approvals that are handled in the Moveworks Assistant
Search plugins
Search plugins mostly read from systems without modifying them:
- Knowledge and file search plugin - connects to all available knowledge bases and file repositories
- Forms plugin - connects to the source of truth for catalog items, e.g. ServiceNow for IT, or Salesforce for HR
- People lookups plugin - connects to the roster management system, e.g. Okta or Active Directory
Generative AI-based conversation
A conversational experience where the Assistant uses generative large language models to produce dynamically generated text responses.
Security Guidance
As you prepare for launch, it’s helpful to ensure your Security team is fully on board. Kickstart the conversation by sharing this blog post that breaks down how we secure the Moveworks Assistant. If your team has further questions, reach out to your Moveworks CSM for more resources.
Additional Resources
Still interested in learning more? We’ve pulled together some handy Assistant resources that you can access at any time!