Moveworks regularly ingests several kinds of items from customers’ systems across its Skills: Answers (KB articles and FAQs), Forms, Identity and Access (user information, distribution lists), and Lookups (people and places). The frequency of these updates varies from several times a day to a few times a week; the schedule is available here.
Specifically, knowledge is processed to create “snippets”: smaller sub-articles that allow relevant knowledge to be matched more precisely with user queries. This enables your bot to serve up the most relevant part of an article when the user’s query is best answered with a focused snippet.
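As a rough illustration of the idea (not Moveworks’ actual implementation), an article can be split into snippets and each snippet scored against the query independently; the function names and the toy word-overlap score below are assumptions for the sketch:

```python
# Hypothetical sketch of snippet-level matching. The splitting rule and
# the scoring function are illustrative assumptions, not the real system.

def split_into_snippets(article: str) -> list[str]:
    """Split an article into snippets at blank-line boundaries."""
    return [p.strip() for p in article.split("\n\n") if p.strip()]

def score(query: str, snippet: str) -> float:
    """Toy relevance score: fraction of query words found in the snippet."""
    q_words = set(query.lower().split())
    s_words = set(snippet.lower().split())
    return len(q_words & s_words) / len(q_words) if q_words else 0.0

def best_snippet(query: str, article: str) -> str:
    """Return the snippet that best matches the query."""
    return max(split_into_snippets(article), key=lambda s: score(query, s))

article = (
    "How to reset your VPN password\n\n"
    "Open the portal and click Reset Password.\n\n"
    "For MFA issues, contact the help desk."
)
print(best_snippet("reset password", article))
```

In a real system the scoring would be semantic rather than word overlap, but the shape is the same: the unit of retrieval is the snippet, so the bot can answer with the focused part of a long article.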
In addition, Moveworks uses an Enterprise Graph, which can be understood as a contextual representation of all kinds of entities, including global entities (which have the same meaning for all customers, such as software names and common abbreviations) and customer-specific entities (portals, process names, office locations, and other company-specific terms). During implementation of a new customer, Moveworks mines tickets to detect such terms and adds them to our Enterprise Graph via a manual review, to assist the bot’s understanding of these terms. While the majority of entity candidates are detected and added in this phase, we work closely with customers to continuously identify and add other candidates, improving the bot’s responses over time.
Moveworks can use external knowledge as part of our External Answers feature. Customers specify which topics and sites they would like Moveworks to ingest and present to users alongside internal (Company) knowledge. More information is available here.
There are a few ways in which external knowledge is handled differently from internal:
- The search ranking presents relevant internal knowledge, when it exists, above external knowledge. If both internal and external answers are determined to be relevant for the query, all internal articles are displayed first, followed by all available external articles.
- External knowledge is sourced from a variety of sites with highly variable formatting. As part of the ingestion, Moveworks uses content extraction techniques to filter out the “scaffolding” on these pages and only ingest the relevant content.
- External answers are only served if the user mentions the topic name in their query. For example, if a customer has requested that external knowledge from the Zoom customer support portal be ingested, the query “how do I change my Zoom background?” will surface this external content, but the query “how do I change my background?” will not. This ensures there is no ambiguity when external knowledge is presented.
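The topic-gating rule in the last bullet can be sketched as a simple eligibility check; the topic list and word-level matching below are illustrative assumptions:

```python
# Hypothetical sketch of topic gating for external answers: external
# content is eligible only when the query explicitly mentions the topic.
# The topic set and the matching logic are assumptions for illustration.

EXTERNAL_TOPICS = {"zoom", "slack", "okta"}  # topics a customer opted into

def external_topics_mentioned(query: str) -> set[str]:
    """Return the opted-in external topics explicitly named in the query."""
    words = set(query.lower().replace("?", "").split())
    return EXTERNAL_TOPICS & words

print(external_topics_mentioned("how do I change my Zoom background?"))  # {'zoom'}
print(external_topics_mentioned("how do I change my background?"))       # set()
```

When the returned set is empty, only internal knowledge is considered, which is what keeps the ambiguous second query from surfacing Zoom’s public documentation.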
Moveworks analyzes every query and extracts key information such as the language, part of speech, domain, intent, and entities. Because your bot has a variety of ways to assist the user, we use this information to identify which Skills are best placed to respond. For example: is this a request for an automated operation, such as resetting a password or provisioning software, which the bot can handle with an action, or is it an information-seeking query about an issue or a process, which requires search to provide the relevant knowledge?
Once the relevant Skills are identified, each of them is provided the user’s query and relevant metadata. Every Skill returns a “bid” that represents its confidence in being able to address the user’s query. An “auctioneer” model reviews all the bids and selects the one it determines has the highest likelihood of resolving the query. This prediction is based on the Skill’s own confidence, but also considers whether the response will resolve the issue automatically. For example, a query about resetting your MFA will surface the Reset MFA Skill, which automates the MFA reset process, above a relevant FAQ on the same topic, because the former resolves the issue automatically while the latter gives the user self-service instructions.
This matching of query to Skill or resolution type is handled automatically by Moveworks once the customer specifies which types of automations they want to enable. There is no need to do any utterance mapping to different resolutions.
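The bid-and-auction flow above can be sketched as follows. The skill names, bid values, and the fixed automation boost are illustrative assumptions; the real auctioneer is an ML model, not a hand-written formula:

```python
# Hypothetical sketch of bid/auction Skill selection. All values and the
# AUTOMATION_BOOST weighting are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Bid:
    skill: str
    confidence: float   # the Skill's own confidence in handling the query
    automates: bool     # whether the Skill can resolve the issue automatically

AUTOMATION_BOOST = 0.2  # assumed preference for automated resolution

def auctioneer(bids: list[Bid]) -> str:
    """Select the bid most likely to actually resolve the user's query."""
    def value(b: Bid) -> float:
        return b.confidence + (AUTOMATION_BOOST if b.automates else 0.0)
    return max(bids, key=value).skill

bids = [
    Bid("Reset MFA Skill", confidence=0.80, automates=True),
    Bid("Answers (MFA FAQ)", confidence=0.85, automates=False),
]
print(auctioneer(bids))  # Reset MFA Skill wins despite lower raw confidence
```

The example mirrors the MFA case above: the FAQ bids with slightly higher raw confidence, but the automated Skill wins because it can resolve the issue outright.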
Moveworks search uses semantic understanding of the query and the available knowledge to match the most appropriate content to the query. Our search machine learning models prioritize content which is most likely to be helpful for the user’s specific query, but also use signals from entities mentioned by the user to show articles that may be useful, even if they don’t directly answer the query.
An important aspect of search is that if no relevant resources are available, your bot will tell the user that it was not able to find relevant content. In such cases, content was likely retrieved, but it was determined to have low relevance or quality for that query and failed to clear the threshold for display. Note that while the threshold is the same across all queries and articles/FAQs, the relevance score itself is calculated for each query-article pair, ensuring that every query and article is assessed rigorously for relevance.
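The threshold mechanism can be sketched like this; the scoring function and the threshold value are illustrative assumptions, not the production model:

```python
# Hypothetical sketch of threshold gating: a relevance score is computed
# per query-article pair, then compared against one global threshold.
# The toy score and RELEVANCE_THRESHOLD value are assumptions.

RELEVANCE_THRESHOLD = 0.5  # same for every query and article (assumed value)

def relevance(query: str, article: str) -> float:
    """Toy per-pair score: fraction of query words appearing in the article."""
    q = set(query.lower().split())
    a = set(article.lower().split())
    return len(q & a) / len(q) if q else 0.0

def results_to_display(query: str, articles: list[str]) -> list[str]:
    """Keep only articles whose per-pair score clears the global threshold,
    best first. An empty result means the bot reports nothing was found."""
    scored = [(relevance(query, a), a) for a in articles]
    passing = [(s, a) for s, a in scored if s >= RELEVANCE_THRESHOLD]
    return [a for s, a in sorted(passing, reverse=True)]

articles = ["vpn password reset steps", "office parking map"]
print(results_to_display("reset vpn password", articles))
```

Note the two moving parts: the score is computed fresh for each query-article pair, while the display cutoff is a single global threshold.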
Knowledge bases are continuously being updated and improved in terms of content quality and topic coverage. Moveworks always stays in sync with the latest state of your knowledge base to serve the latest versions of the content. As explained above, if no relevant articles were found for a query, the bot will choose to not display low-quality results.
Moveworks search models are a common foundation shared across all customers. Moveworks uses a random and anonymized sample of interactions from all customers to regularly retrain and evaluate search performance, ensuring that models continue to learn from a wide variety of usage patterns. This also means that every user and every organization benefits from improvements in search quality, regardless of the size of the user base.
Customers can use the Answers Insights dashboard to view the queries for which an answer was not served, and use that to create a prioritized list of content to be created or updated. We recommend an iterative process of identifying the most common types of questions that are not receiving an answer and analyzing whether there is a knowledge gap. This dashboard gives you visibility into the overall performance of your knowledge and enables you to get the most return on your investment in knowledge operations. Additionally, your Moveworks Customer Success team will partner with you to create a data-driven, prioritized plan for continuous bot improvement.
When reviewing queries and the articles served, you will find that the majority of queries are answered as expected. In any ML-based solution, there will always be a few utterances that do not surface the expected article or rank it lower than expected. This is typically because the query is not specific enough, or because the relevance between the query and the article is low to medium. If the majority of queries are working as expected, however, these edge cases are likely not high-ROI candidates for improvement.
Users may also provide feedback on the bot’s responses directly in the bot. This feedback is one of the signals considered when computing relevance during our model update process. Note that overall relevance will not be significantly impacted by a few users providing feedback; this is by design, to prevent the overall experience from being dictated by a small number of voices. However, feedback is taken into account when the signal is consistent and strong.
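One simple way to picture “consistent and strong” is a minimum sample size plus an agreement ratio before feedback is allowed to move the score at all; the thresholds and adjustment size below are invented for the sketch and are not Moveworks’ actual model:

```python
# Hypothetical sketch of damped feedback: votes influence relevance only
# once the signal is both large enough and one-sided enough. All constants
# are illustrative assumptions.

MIN_VOTES = 20        # assumed minimum sample before feedback counts
MIN_AGREEMENT = 0.8   # assumed fraction of votes that must agree

def feedback_adjustment(upvotes: int, downvotes: int) -> float:
    """Return a small relevance adjustment, or 0.0 when feedback is too
    sparse or too mixed to be trusted."""
    total = upvotes + downvotes
    if total < MIN_VOTES:
        return 0.0          # a few users cannot move the score
    agreement = max(upvotes, downvotes) / total
    if agreement < MIN_AGREEMENT:
        return 0.0          # mixed signal: ignore
    return 0.05 if upvotes > downvotes else -0.05

print(feedback_adjustment(3, 0))    # too few votes -> no effect
print(feedback_adjustment(45, 5))   # consistent and strong -> small boost
```

The design choice this illustrates is the one stated above: individual votes are deliberately weak, and only a sustained, one-directional pattern changes behavior.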
Customers may also contact Moveworks Support for assistance with high priority knowledge-related questions.
Updated 6 months ago