Lifecycle of an utterance

Understanding Moveworks Copilot's User Request Response Mechanism

🚧 This page covers the Moveworks Copilot, the latest conversational experience from Moveworks.

Overview

The Moveworks Copilot uses a combination of natural language understanding, reasoning, and the tools (or plugins) available in the customer’s environment to serve a wide variety of user requests.

Here are the key stages in the lifecycle of a Moveworks Copilot request:

Dialogue Analysis

In this stage, the Moveworks Copilot determines fundamental characteristics of the request, such as the language of the utterance, the domain of the request, the user’s intent, whether the request continues the previous one, and whether it is toxic or inappropriate. If the request can be served, the Moveworks Copilot also identifies entities in the request and attempts to link them to known entities in Moveworks’ entity database. The query may also be rewritten or rephrased to incorporate contextual references from previous interactions.

The output of this step is an expanded understanding of the request and how it fits within the ongoing conversation between the user and the Moveworks Copilot.
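The expanded understanding produced by dialogue analysis can be pictured as a structured record. The sketch below is a hypothetical illustration; the field names are assumptions for clarity, not Moveworks’ actual schema.

```python
# Hypothetical sketch of a dialogue-analysis result. All field names are
# illustrative assumptions, not the actual Moveworks data model.
from dataclasses import dataclass, field

@dataclass
class DialogueAnalysis:
    language: str                  # detected language of the utterance
    domain: str                    # e.g. "IT" or "HR"
    intent: str                    # inferred user intent
    is_continuation: bool          # does this continue the previous request?
    is_toxic: bool                 # toxic or inappropriate content?
    linked_entities: list = field(default_factory=list)  # entities linked to the entity database
    rewritten_query: str = ""      # query rephrased with contextual references resolved

# Example: a follow-up "reset it" resolved against earlier context.
analysis = DialogueAnalysis(
    language="en",
    domain="IT",
    intent="reset_password",
    is_continuation=True,
    is_toxic=False,
    linked_entities=["Okta"],
    rewritten_query="Reset my Okta password",
)
```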

Planning

With the information from dialogue analysis, the Moveworks Copilot’s reasoning engine, powered by GPT-4o, reviews the previous conversation history and determines what to do next and what to communicate to the user. In this stage, the Moveworks Copilot also incorporates relevant customer-specific business rules and overrides as it decides its next step.

If a step has already been taken, then the Moveworks Copilot also reviews the output of that previous step and decides whether to try a different approach or continue with the current strategy of solving the request.

Plugin Selection

An important part of planning that deserves to be called out separately is plugin selection. Here, the Moveworks Copilot tries to match the user request with the best plugin from the list of all available plugins in the customer’s environment (including custom plugins built using Creator Studio) by using their descriptions and the arguments they take as inputs.

Currently, the Moveworks Copilot selects a single plugin, the one it judges most useful for the request, at each step.
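To make the selection step concrete, here is a toy sketch of choosing one plugin from a catalog of descriptions. The real Copilot uses its LLM-based reasoning engine for this matching; the keyword-overlap scoring below is only an illustrative stand-in, and the plugin names are invented.

```python
# Toy stand-in for plugin selection: score each plugin's description against
# the request and keep the single best match. The real system uses an LLM,
# not keyword overlap; this only illustrates the "one best plugin" contract.
def select_plugin(request, plugins):
    """Return the plugin whose description best overlaps the request, or None."""
    words = set(request.lower().split())

    def score(plugin):
        return len(words & set(plugin["description"].lower().split()))

    best = max(plugins, key=score)
    return best if score(best) > 0 else None

# Hypothetical catalog, including a custom Creator Studio-style plugin.
plugins = [
    {"name": "knowledge_search", "description": "search knowledge base articles", "args": ["query"]},
    {"name": "pto_balance", "description": "look up remaining pto balance", "args": ["user_id"]},
]

selected = select_plugin("how much pto balance do I have left", plugins)
```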

Plugin Execution

After a plugin is selected, the Moveworks Copilot sends it the user’s request in the form that plugin requires. The plugin executes the request and responds to the core reasoning engine with either a completed, successful response, an error, or a request for more information from the user needed to finish the task.
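The three plugin outcomes described above can be sketched as a small handler. The status names and routing messages here are assumptions for illustration, not Moveworks’ actual protocol.

```python
# Sketch of the three plugin outcomes named above: success, error, or a
# request for more information. Status names are illustrative assumptions.
from enum import Enum

class PluginStatus(Enum):
    SUCCESS = "success"
    ERROR = "error"
    NEEDS_INFO = "needs_info"

def handle_plugin_response(status, payload):
    """Route a plugin's response back to the reasoning engine."""
    if status is PluginStatus.SUCCESS:
        return f"completed: {payload}"
    if status is PluginStatus.NEEDS_INFO:
        return f"ask user: {payload}"
    return f"replan after error: {payload}"
```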

Plan Evaluation

Based on the response from the plugin, the reasoning engine determines what to do next. This stage is essentially a repeat of the planning step above, and it runs after each step for as long as an utterance is being processed.
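Taken together, planning, plugin execution, and plan evaluation form a loop that repeats until the request is resolved. The minimal sketch below assumes hypothetical `plan`, `execute`, and `is_resolved` helpers; it shows only the control flow, not the actual reasoning engine.

```python
# Minimal sketch of the plan -> execute -> evaluate loop described in the
# stages above. plan, execute, and is_resolved are hypothetical stand-ins
# for the reasoning engine, a plugin call, and plan evaluation.
def serve_request(request, plan, execute, is_resolved, max_steps=5):
    """Alternate planning and plugin execution until the request is resolved."""
    history = []
    for _ in range(max_steps):
        step = plan(request, history)   # planning / plan evaluation
        result = execute(step)          # plugin execution
        history.append((step, result))
        if is_resolved(result):
            break
    return history

# Toy usage: a single step resolves the request.
steps = serve_request(
    "reset my password",
    plan=lambda req, hist: "call password-reset plugin",
    execute=lambda step: "password reset link sent",
    is_resolved=lambda result: "sent" in result,
)
```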

Response Generation

Depending on the plan evaluation, the Moveworks Copilot then produces a response for the user that may contain the answer to the question, an update on further processing, or a request for clarification or disambiguation. This step combines two key capabilities of large language models, reasoning and text generation, and gives the output its characteristic natural tone and fluidity.

How does the Moveworks Copilot learn and improve?

The Moveworks Copilot consists of a suite of machine learning models that are used for a variety of tasks such as reasoning and planning (GPT-4o), knowledge search, toxicity detection, language detection, resource translation and many others.

The Copilot is continuously improved through regular updates not just to its ML models, but also to the user experience, architecture, and infrastructure.

This improvement happens mainly via two pathways:

  1. Review of subjective feedback: Moveworks annotation teams and ML engineers regularly review masked and anonymized usage data to identify themes for improvement. In addition, user feedback from MS Teams or Slack, as well as customer feedback from support tickets and the Community, is used to identify improvements to specific use cases.
  2. Targeted metric improvements: We closely monitor metrics such as latency, error rates, and response rates, and prioritize investments to improve the platform across all use cases.

Currently, there is no automated self-learning feedback loop in the Moveworks Copilot, so no automatic changes in behavior occur solely from providing feedback or from interactions between the user and the Copilot. This means that the Copilot does not automatically adapt its behavior based on previous examples of similar interactions. However, we review randomly sampled usage data to identify key patterns and make continuous improvements for high-frequency issues.