Computer science
Annotation
Natural language understanding
Artificial intelligence
Scope (computer science)
Dialogue system
Mechanism (biology)
Machine learning
Natural language processing
Natural language
World Wide Web
Dialog box
Programming language
Philosophy
Epistemology
Authors
Pragaash Ponnusamy, Alireza Roshan Ghias, Yi Yi, Benjamin Yao, Chenlei Guo, Ruhi Sarikaya
Source
Journal: AI Magazine
[Association for the Advancement of Artificial Intelligence]
Date: 2021-12-01
Volume/Issue: 42(4): 43-56
Citations: 20
Abstract
Today, most large-scale conversational AI agents, such as Alexa, Siri, or Google Assistant, are built using manually annotated data to train the different components of the system, including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing the partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, and coupling it with a guardrail rewrite selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers.
The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance-level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
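The abstract's central mechanism is an absorbing Markov chain over user reformulation sessions. The following sketch illustrates that general idea only, not the paper's actual implementation: transient states are user utterances, absorbing states are session outcomes (success or abandonment), and the standard fundamental-matrix formula B = (I - Q)^{-1} R gives the probability that a session starting from a defective utterance eventually ends in each outcome. The toy utterances, states, and transition probabilities below are invented for illustration.

```python
import numpy as np

# Transient states (hypothetical utterances observed in sessions):
#   0 = "play thriler"        (defective query, e.g., ASR/spelling error)
#   1 = "play thriller song"  (a reformulation)
# Absorbing states (session outcomes):
#   0 = "play thriller by michael jackson" resolved successfully
#   1 = session abandoned
Q = np.array([[0.0, 0.6],   # transitions among transient states
              [0.0, 0.0]])
R = np.array([[0.3, 0.1],   # transitions from transient to absorbing states
              [0.8, 0.2]])

# Fundamental matrix N = (I - Q)^{-1}; B[i, j] is the probability that a
# session starting at transient state i is absorbed in absorbing state j.
N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
B = N @ R

# Probability the defective query eventually reaches the successful
# terminal utterance: 0.3 + 0.6 * 0.8 = 0.78
print(round(B[0, 0], 2))  # -> 0.78
```

Ranking candidate terminal utterances by such absorption probabilities, aggregated over many anonymized sessions, is one way a collaborative-filtering view of reformulations can surface a high-confidence rewrite for a recurring defective query.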