Keywords
Computer science
Artificial intelligence
Recurrent neural network
Deep learning
Word (group theory)
Short-term memory
Artificial neural network
Machine learning
Pattern recognition (psychology)
Speech recognition
Mathematics
Geometry
Authors
Jie Du, Chi-Man Vong, C. L. Philip Chen
Identifier
DOI:10.1109/tcyb.2020.2969705
Abstract
High accuracy in text classification can be achieved by learning multiple kinds of information simultaneously, such as sequence information and word importance. In this article, a flat neural network called the broad learning system (BLS) is employed to derive two novel learning methods for text classification: recurrent BLS (R-BLS) and gated BLS (G-BLS), the latter with a long short-term memory (LSTM)-like architecture. The two proposed methods have three advantages: 1) higher accuracy, due to the simultaneous learning of multiple kinds of information, even compared to a deep LSTM that extracts deeper but single-type information only; 2) significantly faster training than LSTM, due to the noniterative learning in BLS; and 3) easy integration with other discriminant information for further improvement. The proposed methods have been evaluated on 13 real-world datasets spanning various types of text classification. The experimental results show that the proposed methods achieve higher accuracy than LSTM while taking significantly less training time on most of the evaluated datasets, especially when the LSTM has a deep architecture. Compared to R-BLS, G-BLS has an extra forget gate that controls the flow of information (similar to LSTM) to further improve accuracy on text classification, so G-BLS is more effective while R-BLS is more efficient.
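To make the training-time claim concrete, below is a minimal NumPy sketch of BLS-style noniterative learning. It is not the authors' implementation: the layer sizes (n_feature, n_enhance), the regularizer lam, the fixed random mappings, the toy data, and the elementwise sigmoid gate near the end (a stand-in for G-BLS's forget gate) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: X as document feature vectors, Y as one-hot labels over 3 classes.
X = rng.normal(size=(200, 50))
Y = np.eye(3)[rng.integers(0, 3, size=200)]

n_feature, n_enhance, lam = 100, 200, 1e-2  # illustrative sizes and regularizer

# Feature nodes: a fixed random linear map of the input, not trained here.
Wf = rng.normal(size=(50, n_feature))
Z = np.tanh(X @ Wf)

# Enhancement nodes: a fixed random nonlinear expansion of the feature nodes.
We = rng.normal(size=(n_feature, n_enhance))
H = np.tanh(Z @ We)

# G-BLS-style forget gate (illustrative assumption): an elementwise sigmoid
# gate computed from the feature nodes scales the enhancement information.
Wg = rng.normal(size=(n_feature, n_enhance))
gate = 1.0 / (1.0 + np.exp(-(Z @ Wg)))
H = gate * H

# Only the output weights are learned, in closed form via ridge regression.
# This single linear solve is the "noniterative" training step that makes
# BLS far cheaper to train than a backprop-trained (deep) LSTM.
A = np.hstack([Z, H])
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

pred = (A @ W_out).argmax(axis=1)
print("training accuracy:", (pred == Y.argmax(axis=1)).mean())
```

In published BLS work the feature-mapping weights are usually refined (e.g., with a sparse autoencoder) rather than left purely random; the sketch omits that step for brevity and only illustrates the closed-form readout that underlies the speed comparison in the abstract.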