Interpretability
Computer science
Commonsense reasoning
Generalizability theory
Adaptation (eye)
Artificial intelligence
Human–computer interaction
Knowledge management
Risk analysis (engineering)
Psychology
Developmental psychology
Neuroscience
Medicine
Authors
Hao Sha, Yao Mu, Yuxuan Jiang, Li Chen, Chenfeng Xu, Ping Luo, Shengbo Eben Li, Masayoshi Tomizuka, Wei Zhan, Mingyu Ding
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 44
Identifier
DOI: 10.48550/arxiv.2310.03026
Abstract
Existing learning-based autonomous driving (AD) systems face challenges in comprehending high-level information, generalizing to rare events, and providing interpretability. To address these problems, this work employs Large Language Models (LLMs) as a decision-making component for complex AD scenarios that require human commonsense understanding. We devise cognitive pathways to enable comprehensive reasoning with LLMs, and develop algorithms for translating LLM decisions into actionable driving commands. Through this approach, LLM decisions are seamlessly integrated with low-level controllers by guided parameter matrix adaptation. Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, even multi-vehicle coordination, thanks to the commonsense reasoning capabilities of LLMs. This paper presents an initial step toward leveraging LLMs as effective decision-makers for intricate AD scenarios in terms of safety, efficiency, generalizability, and interoperability. We aspire for it to serve as inspiration for future research in this field. Project page: https://sites.google.com/view/llm-mpc
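The abstract's phrase "guided parameter matrix adaptation" describes mapping a high-level LLM decision onto the weight matrices of a low-level controller. The paper's actual algorithm is not reproduced here; the following is a minimal Python sketch of the general pattern, where the decision names (`"normal"`, `"yield"`, `"overtake"`), the weight values, and the toy weighted-feedback controller are all hypothetical stand-ins for the real MPC formulation.

```python
# Hedged sketch (not the paper's implementation): an LLM's textual
# decision selects cost-weight matrices that parameterize a low-level
# controller, so high-level reasoning steers low-level behavior.

# Hypothetical decisions mapped to diagonal state (q) and control (r)
# cost weights; values are illustrative only.
DECISION_TO_WEIGHTS = {
    "normal":   {"q": [1.0, 1.0], "r": [0.1]},
    "yield":    {"q": [0.5, 2.0], "r": [0.5]},   # softer tracking, smoother control
    "overtake": {"q": [2.0, 0.5], "r": [0.05]},  # aggressive position tracking
}

def adapt_parameters(llm_decision: str) -> dict:
    """Map an LLM's textual decision to controller weights.

    Unknown decisions fall back to the 'normal' profile.
    """
    return DECISION_TO_WEIGHTS.get(llm_decision, DECISION_TO_WEIGHTS["normal"])

def control_action(state_error: list, weights: dict) -> list:
    """Toy weighted-feedback step standing in for an MPC solve.

    Each state-error component is penalized by its q weight and
    attenuated by the control-effort weight r.
    """
    r = weights["r"][0]
    return [-(qi * ei) / (1.0 + r) for qi, ei in zip(weights["q"], state_error)]
```

Usage would look like `control_action([1.0, 0.0], adapt_parameters("yield"))`: the same state error yields a gentler action under "yield" than under "overtake", which is the adaptation effect the abstract alludes to.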