Keywords
Computer science, Graph, Reasoning system, Automated reasoning, Knowledge graph, Opportunistic reasoning, Verbal reasoning, Artificial intelligence, Theoretical computer science, Natural language processing, Model-based reasoning, Knowledge representation and reasoning, Psychology, Cognition, Neuroscience
Authors
Linhao Luo, Zicheng Zhao, Gong Chen, Gholamreza Haffari, Shirui Pan
Source
Venue: arXiv (Cornell University)
Date: 2024-10-16
Citations: 5
Identifiers
DOI: 10.48550/arxiv.2410.13080
Abstract
Large language models (LLMs) have demonstrated impressive reasoning abilities, but they still struggle with faithful reasoning due to knowledge gaps and hallucinations. To address these issues, knowledge graphs (KGs) have been utilized to enhance LLM reasoning through their structured knowledge. However, existing KG-enhanced methods, either retrieval-based or agent-based, encounter difficulties in accurately retrieving knowledge and efficiently traversing KGs at scale. In this work, we introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs. To eliminate hallucinations, GCR ensures faithful KG-grounded reasoning by integrating KG structure into the LLM decoding process through KG-Trie, a trie-based index that encodes KG reasoning paths. KG-Trie constrains the decoding process, allowing LLMs to directly reason on graphs and generate faithful reasoning paths grounded in KGs. Additionally, GCR leverages a lightweight KG-specialized LLM for graph-constrained reasoning alongside a powerful general LLM for inductive reasoning over multiple reasoning paths, resulting in accurate reasoning with zero reasoning hallucination. Extensive experiments on several KGQA benchmarks demonstrate that GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
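The abstract's central mechanism is KG-Trie: encoding knowledge-graph reasoning paths in a trie so that, at each decoding step, the LLM may only emit tokens that extend a valid path in the KG. A minimal sketch of that idea is below; the class name, tokenization of paths, and toy triples are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the KG-Trie idea: store KG reasoning paths in a trie, then use
# it to constrain which tokens a decoder may emit next. Hypothetical
# illustration only; names and path tokenization are assumptions.

class KGTrie:
    def __init__(self):
        self.children = {}   # token -> KGTrie
        self.is_end = False  # True if a full reasoning path ends here

    def insert(self, path):
        """Insert one reasoning path (a sequence of tokens) into the trie."""
        node = self
        for token in path:
            node = node.children.setdefault(token, KGTrie())
        node.is_end = True

    def allowed_next(self, prefix):
        """Tokens the decoder may emit after generating `prefix`."""
        node = self
        for token in prefix:
            node = node.children.get(token)
            if node is None:
                return set()  # prefix left the KG: no valid continuation
        return set(node.children)

# Two toy reasoning paths from a knowledge graph.
trie = KGTrie()
trie.insert(["Paris", "capital_of", "France"])
trie.insert(["Paris", "located_in", "Europe"])

print(sorted(trie.allowed_next(["Paris"])))  # ['capital_of', 'located_in']
print(trie.allowed_next(["London"]))         # set()
```

During constrained decoding, `allowed_next` would be used to mask the LLM's vocabulary logits so that only KG-grounded continuations receive probability mass, which is how hallucinated paths are ruled out by construction.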