Computer science
Answer set programming
Natural language understanding
Artificial intelligence
Natural language
Parsing
Natural language processing
Knowledge representation and reasoning
Question answering
Automated reasoning
Language model
Set (abstract data type)
Programming language
Logic programming
Authors
Zhun Yang,Adam Ishay,Joohyung Lee
Identifier
DOI:10.18653/v1/2023.findings-acl.321
Abstract
While large language models (LLMs), such as GPT-3, appear to be robust and general, their reasoning ability is not at a level to compete with the best models trained for specific natural language reasoning problems. In this study, we observe that a large language model can serve as a highly effective few-shot semantic parser. It can convert natural language sentences into a logical form that serves as input for answer set programs, a logic-based declarative knowledge representation formalism. The combination results in a robust and general system that can handle multiple question-answering tasks without requiring retraining for each new task. It only needs a few examples to guide the LLM's adaptation to a specific task, along with reusable ASP knowledge modules that can be applied to multiple tasks. We demonstrate that this method achieves state-of-the-art performance on several NLP benchmarks, including bAbI, StepGame, CLUTRR, and gSCAN. Additionally, it successfully tackles robot planning tasks that an LLM alone fails to solve.
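The abstract's pipeline — an LLM acting as a few-shot semantic parser that emits logical atoms, which a reusable knowledge module then reasons over — can be illustrated with a minimal sketch. This is not the paper's actual system: the LLM step is stubbed out with hand-written atoms, and the reasoning rule is expressed in plain Python rather than an ASP solver. All predicate and function names here are illustrative assumptions.

```python
# Sketch of the parse-then-reason pipeline for a bAbI-style story.
# Atoms a few-shot LLM parser might emit for:
# "Mary went to the kitchen. Mary picked up the apple. Mary went to the garden."
# Each atom is (predicate, person, argument, timestep).
atoms = [
    ("go", "mary", "kitchen", 1),
    ("pickup", "mary", "apple", 2),
    ("go", "mary", "garden", 3),
]

def locate(atoms, obj):
    """Answer 'Where is <obj>?' by replaying events in time order.

    Encodes (in Python) the commonsense rule an ASP knowledge module
    would state declaratively: a carried object moves with its carrier.
    """
    holder, person_loc = None, {}
    for pred, person, arg, _t in sorted(atoms, key=lambda a: a[3]):
        if pred == "go":
            person_loc[person] = arg        # person changes location
        elif pred == "pickup" and arg == obj:
            holder = person                  # object now travels with person
        elif pred == "drop" and arg == obj:
            holder = None                    # object stays where it was dropped
    return person_loc.get(holder) if holder else None

print(locate(atoms, "apple"))  # -> garden
```

In the actual system the rule above would live in an ASP module and apply unchanged across tasks; only the few-shot prompt that maps sentences to atoms changes per benchmark.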