Deep Belief Network
Artificial Intelligence
Computer Science
Support Vector Machine
Boosting (machine learning)
Artificial Neural Network
Machine Learning
Deep Learning
Principle of Maximum Entropy
Backpropagation
Pattern Recognition (psychology)
Authors
Ruhi Sarikaya, Geoffrey E. Hinton, Anoop Deoras
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
[Institute of Electrical and Electronics Engineers]
Date: 2014-04-01
Volume/Issue: 22 (4): 778-784
Citations: 397
Identifier
DOI: 10.1109/taslp.2014.2303296
Abstract
Applications of Deep Belief Nets (DBN) to various problems have been the subject of a number of recent studies ranging from image classification and speech recognition to audio classification. In this study we apply DBNs to a natural language understanding problem. The recent surge of activity in this area was largely spurred by the development of a greedy layer-wise pretraining method that uses an efficient learning algorithm called Contrastive Divergence (CD). CD allows DBNs to learn a multi-layer generative model from unlabeled data and the features discovered by this model are then used to initialize a feed-forward neural network which is fine-tuned with backpropagation. We compare a DBN-initialized neural network to three widely used text classification algorithms: Support Vector Machines (SVM), boosting and Maximum Entropy (MaxEnt). The plain DBN-based model gives a call-routing classification accuracy that is equal to the best of the other models. However, using additional unlabeled data for DBN pre-training and combining DBN-based learned features with the original features provides significant gains over SVMs, which, in turn, performed better than both MaxEnt and Boosting.
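The pretraining step the abstract refers to trains each layer as a Restricted Boltzmann Machine with Contrastive Divergence (CD-1), then uses the learned weights to initialize a feed-forward network. A minimal sketch of one CD-1 update, with illustrative toy dimensions and a learning rate not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes and learning rate for illustration only (not the paper's settings).
n_visible, n_hidden, lr = 6, 4, 0.1
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def cd1_step(v0):
    """One CD-1 update on a batch of binary visible vectors."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities given the data, then a binary sample.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Update by the difference between data and reconstruction statistics.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# A small batch of unlabeled binary data stands in for the unlabeled corpus.
data = (rng.random((20, n_visible)) < 0.5).astype(float)
for _ in range(100):
    cd1_step(data)
```

In the stacked setting, the hidden activations of one trained RBM become the input to the next; the resulting weights initialize a feed-forward classifier that is fine-tuned with backpropagation on the labeled call-routing data.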