Computer science
Question answering
Knowledge base
Task (project management)
Graph
Base (topology)
Knowledge graph
Artificial intelligence
Natural language processing
Theoretical computer science
Mathematics
Engineering
Systems engineering
Mathematical analysis
Source
Journal: Electronics
Publisher: MDPI AG
Date: 2024-12-20
Volume/Issue: 13 (24): 5011
Identifiers
DOI: 10.3390/electronics13245011
Abstract
In the field of question answering (QA), large language models (LLMs) cannot learn vertical-domain knowledge during the pre-training stage, which leads to low accuracy in domain QA. Conversely, knowledge base question answering (KBQA) can combine a knowledge base (KB) containing domain knowledge with small language models to achieve high accuracy at low cost. In KBQA, an inference subgraph is composed of the entity nodes and relationships pertinent to the question, and the final answers are derived from that subgraph. However, two critical problems remain in this field: (i) a fixed or shrinking inference-subgraph scope over the reasoning process may exclude relevant knowledge and thus limit KBQA, and (ii) a lack of alignment between the inference subgraph and the question leads to low accuracy. In this work, we propose a dynamic graph reasoning model with an auxiliary task, the DGRMWAT, which addresses these challenges through two key innovations: (i) dynamic graph reasoning, whereby the scope of the inference subgraph is updated at each reasoning step to bring in more relevant knowledge and prune irrelevant knowledge, and (ii) an auxiliary task that strengthens the correlation between the inference subgraph and the question by computing the similarities between the inference subgraph and the QA context node. Experiments on two QA benchmark datasets, CommonsenseQA and OpenBookQA, indicate that the DGRMWAT achieves improvements over the baseline models and LLMs.
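The abstract's two innovations lend themselves to a short illustration. The PyTorch sketch below is not the authors' DGRMWAT implementation; every name, hyperparameter, and the InfoNCE-style form of the auxiliary loss are assumptions made for illustration. It shows (i) one dynamic reasoning step that prunes the least question-relevant active nodes and grows the subgraph with the most relevant inactive neighbors, so the scope changes at every hop, and (ii) an auxiliary alignment loss between pooled subgraph embeddings and QA context node embeddings.

```python
# Illustrative sketch only: hypothetical names and hyperparameters,
# not the authors' DGRMWAT code.
import torch
import torch.nn.functional as F


def update_subgraph_scope(node_emb, question_emb, active, neighbors,
                          keep_ratio=0.8, grow_k=4):
    """One dynamic reasoning step (assumed form): drop the least
    question-relevant active nodes, then admit the most relevant
    inactive neighbors, so the subgraph scope is updated per hop.

    node_emb:     [N, d] embeddings of all KB entity nodes
    question_emb: [d]    embedding of the question
    active:       [N]    bool mask of the current inference subgraph
    neighbors:    dict mapping node index -> list of neighbor indices
    """
    # Relevance of every node to the question (cosine similarity).
    rel = F.cosine_similarity(node_emb, question_emb.unsqueeze(0), dim=-1)

    # Prune: keep only the top `keep_ratio` fraction of active nodes.
    active_idx = active.nonzero(as_tuple=True)[0]
    n_keep = max(1, int(keep_ratio * active_idx.numel()))
    kept = active_idx[rel[active_idx].topk(n_keep).indices]

    # Grow: rank the not-yet-active neighbors of kept nodes, admit top k.
    cand = {j for i in kept.tolist() for j in neighbors.get(i, [])
            if not active[j]}
    new_active = torch.zeros_like(active)
    new_active[kept] = True
    if cand:
        cand_idx = torch.tensor(sorted(cand), device=active.device)
        k = min(grow_k, cand_idx.numel())
        new_active[cand_idx[rel[cand_idx].topk(k).indices]] = True
    return new_active


def alignment_aux_loss(subgraph_emb, context_emb, temperature=0.1):
    """Auxiliary task (assumed InfoNCE form): pull each pooled inference
    subgraph toward its own QA context node and away from the other
    contexts in the batch.

    subgraph_emb: [B, d] pooled subgraph embeddings
    context_emb:  [B, d] QA context node embeddings
    """
    sim = F.cosine_similarity(subgraph_emb.unsqueeze(1),
                              context_emb.unsqueeze(0), dim=-1) / temperature
    target = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, target)
```

Scoring relevance with cosine similarity and aligning via a batch-contrastive loss is one plausible reading of "computing the similarities between the inference subgraph and the QA context node"; the paper may use a different similarity measure or loss form.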