Keywords
Interpretability, Computer science, Inference, Artificial intelligence, Gaussian distribution, Topic model, Focus (optics), Relevance (law), Word embedding, Word (group theory), Natural language processing, Mixture model, Embedding, Machine learning, Mathematics, Physics, Law, Geometry, Optics, Quantum mechanics, Political science
Authors
Yi-Kun Tang, Heyan Huang, Xuewen Shi, Xian-Ling Mao
Source
Journal: ACM Transactions on Asian and Low-Resource Language Information Processing
Date: 2023-03-25
Volume/Issue: 22 (4): 1-18
Abstract
Neural variational inference-based topic modeling has achieved great success in mining abstract topics from documents. However, these topic models mainly focus on optimizing the topic proportions for documents, while the quality and internal construction of the topics themselves are usually neglected. Specifically, such models offer no guarantee that semantically related words are assigned to the same topic, and they struggle to ensure the interpretability of topics. Moreover, many topical words recur frequently among the top words of different topics, which makes the learned topics semantically redundant and similar to one another, and of little significance for further study. To solve these problems, we propose a novel neural topic model called the Neural Variational Gaussian Mixture Topic Model (NVGMTM). We use Gaussian distributions to capture the semantic relevance between words within topics: each topic in NVGMTM is modeled as a multivariate Gaussian distribution over words in the word-embedding space. Thus, semantically related words share similar probabilities in each topic, which makes the topics more coherent and interpretable. Experimental results on two public corpora show that the proposed model outperforms state-of-the-art baselines.
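The following is a minimal sketch, not the authors' implementation, of the core idea the abstract describes: treating a topic as a multivariate Gaussian over word embeddings, so that a word's probability under a topic is derived from the Gaussian density evaluated at its embedding and nearby (semantically related) words receive similar probabilities. All names here (embed_dim, vocab, topic_mean, topic_logvar) and the diagonal-covariance form are illustrative assumptions.

```python
# Sketch: a topic as a Gaussian in word-embedding space (assumed diagonal covariance).
# The topic-word distribution comes from normalizing the Gaussian density over the vocabulary.
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8                      # hypothetical embedding dimensionality
vocab = ["game", "match", "team", "stock", "market", "price"]
embeddings = rng.normal(size=(len(vocab), embed_dim))   # stand-in for pretrained word embeddings

# One topic: mean and log-variance vectors in embedding space (here centered near the first 3 words).
topic_mean = embeddings[:3].mean(axis=0)
topic_logvar = np.zeros(embed_dim)                       # identity covariance for simplicity

def topic_word_distribution(mean, logvar, emb):
    """Log Gaussian density at each word embedding, normalized into a distribution over the vocabulary."""
    var = np.exp(logvar)
    diff = emb - mean
    log_density = -0.5 * np.sum(diff**2 / var + logvar + np.log(2 * np.pi), axis=1)
    log_density -= log_density.max()                     # numerical stability before exponentiating
    probs = np.exp(log_density)
    return probs / probs.sum()

probs = topic_word_distribution(topic_mean, topic_logvar, embeddings)
for word, p in zip(vocab, probs):
    print(f"{word:>7s}: {p:.3f}")  # words whose embeddings lie near the topic mean score higher
```

In the paper's full model these Gaussian parameters would be learned jointly with document-level topic proportions via neural variational inference; the sketch only illustrates why words with nearby embeddings end up with similar probabilities within a topic.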