Computer science
Autoencoder
Classifier
Natural language understanding
Utterance
Artificial intelligence
Domain
Natural language
Generative adversarial network
Dialogue
Machine learning
Natural language processing
Deep learning
Mathematical analysis
Mathematics
World Wide Web
Authors
Yinhe Zheng, Guanyi Chen, Minlie Huang
Identifier
DOI:10.1109/taslp.2020.2983593
Abstract
Natural Language Understanding (NLU) is a vital component of dialogue systems, and its ability to detect Out-of-Domain (OOD) inputs is critical in practical applications, since accepting an OOD input that the current system does not support may lead to catastrophic failure. However, most existing OOD detection methods rely heavily on manually labeled OOD samples and cannot take full advantage of unlabeled data, which limits their feasibility in practical applications. In this paper, we propose a novel model that generates high-quality pseudo OOD samples akin to IN-Domain (IND) input utterances, thereby improving the performance of OOD detection. To this end, an autoencoder is trained to map an input utterance into a latent code. Moreover, the codes of IND and OOD samples are trained to be indistinguishable by utilizing a generative adversarial network. To provide more supervision signals, an auxiliary classifier is introduced to regularize the generated OOD samples to have indistinguishable intent labels. Experiments show that the pseudo OOD samples generated by our model can be used to effectively improve OOD detection in NLU. We also demonstrate that the effectiveness of these pseudo OOD samples can be further improved by efficiently utilizing unlabeled data.
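The core adversarial idea in the abstract — train a generator so that its pseudo-OOD latent codes become indistinguishable from IND codes under a discriminator — can be illustrated with a toy sketch. This is not the authors' implementation: the autoencoder is omitted (IND latent codes are simulated as Gaussian samples), the generator is a plain linear map over noise, and the discriminator is logistic regression trained with the standard non-saturating GAN objective; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np


def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))


class LatentGAN:
    """Toy adversarial trainer: a linear generator learns to emit pseudo-OOD
    latent codes that a logistic-regression discriminator cannot tell apart
    from in-domain (IND) latent codes. Hypothetical sketch, not the paper's model."""

    def __init__(self, dim, noise_dim, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.1 * self.rng.standard_normal((dim, noise_dim))  # generator weights
        self.b = np.zeros(dim)                                     # generator bias
        self.w = 0.1 * self.rng.standard_normal(dim)               # discriminator weights
        self.c = 0.0                                               # discriminator bias

    def generate(self, n):
        """Map Gaussian noise to pseudo-OOD latent codes."""
        z = self.rng.standard_normal((n, self.W.shape[1]))
        return z @ self.W.T + self.b, z

    def train(self, ind_codes, steps=2000, batch=64, lr=0.1):
        n = len(ind_codes)
        for _ in range(steps):
            # Discriminator step: push real codes toward label 1, fakes toward 0.
            real = ind_codes[self.rng.integers(0, n, batch)]
            fake, _ = self.generate(batch)
            p_real = sigmoid(real @ self.w + self.c)
            p_fake = sigmoid(fake @ self.w + self.c)
            grad_w = ((p_real - 1)[:, None] * real + p_fake[:, None] * fake).mean(0)
            grad_c = (p_real - 1).mean() + p_fake.mean()
            self.w -= lr * grad_w
            self.c -= lr * grad_c
            # Generator step: non-saturating loss -log D(G(z)) pushes fakes
            # toward the region the discriminator scores as "real".
            fake, z = self.generate(batch)
            p = sigmoid(fake @ self.w + self.c)
            dfake = -(1 - p)[:, None] * self.w   # dL_G / d(fake codes)
            self.W -= lr * (dfake.T @ z) / batch
            self.b -= lr * dfake.mean(0)
```

In the full model the "real" codes would come from the trained autoencoder rather than a Gaussian, and the auxiliary intent classifier would add a second loss term on the generated codes; this sketch only exercises the adversarial indistinguishability step.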