Taxonomy (biology)
Reinforcement
Reinforcement learning
Computer science
Cognitive science
Natural language processing
Psychology
Artificial intelligence
Social psychology
Ecology
Biology
Authors
Yuji Cao,Huan Zhao,Yuheng Cheng,Ting Shu,Yue Chen,Guolong Liu,Gaoqi Liang,J. Leon Zhao,Jinyue Yan,Yun Li
Identifier
DOI: 10.1109/tnnls.2024.3497992
Abstract
With extensive pretrained knowledge and high-level general capabilities, large language models (LLMs) emerge as a promising avenue to augment reinforcement learning (RL) in aspects such as multitask learning, sample efficiency, and high-level task planning. In this survey, we provide a comprehensive review of the existing literature on LLM-enhanced RL and summarize its characteristics compared with conventional RL methods, aiming to clarify the research scope and directions for future studies. Using the classical agent-environment interaction paradigm, we propose a structured taxonomy to systematically categorize LLMs' functionalities in RL, covering four roles: information processor, reward designer, decision-maker, and generator. For each role, we summarize the methodologies, analyze the specific RL challenges that are mitigated, and provide insights into future directions. Finally, we discuss the comparative analysis of the roles, potential applications, and the prospective opportunities and challenges of LLM-enhanced RL. By proposing this taxonomy, we aim to provide a framework for researchers to effectively leverage LLMs in the RL field, potentially accelerating RL applications in complex domains such as robotics, autonomous driving, and energy systems.
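To make the abstract's taxonomy concrete, the sketch below wires the four LLM roles into a classical agent-environment loop. This is a minimal illustrative assumption, not code from the paper: each `llm_*` function is a stub standing in for an actual LLM call, and the names, environment, and reward logic are all hypothetical.

```python
# Hypothetical sketch of the survey's four LLM roles inside an RL loop.
# All llm_* functions are placeholders for real LLM calls.

def llm_process_information(raw_observation: str) -> str:
    """Role 1 -- information processor: clean/compress raw observations."""
    return raw_observation.strip().lower()

def llm_design_reward(state: str, action: str) -> float:
    """Role 2 -- reward designer: score a transition against the task."""
    return 1.0 if action in state else 0.0

def llm_decide(state: str, actions: list) -> str:
    """Role 3 -- decision-maker: select (or suggest) an action directly."""
    return actions[0]

def llm_generate(state: str) -> str:
    """Role 4 -- generator: synthesize data, e.g. an imagined next state."""
    return f"imagined successor of '{state}'"

def run_episode(steps: int = 3) -> float:
    """Run a toy episode where the LLM plays all four roles."""
    actions = ["move", "wait"]
    obs, total_reward = "  MOVE to the goal  ", 0.0
    for _ in range(steps):
        state = llm_process_information(obs)              # role 1
        action = llm_decide(state, actions)               # role 3
        total_reward += llm_design_reward(state, action)  # role 2
        obs = llm_generate(state)                         # role 4
    return total_reward
```

In a real LLM-enhanced RL system these stubs would be replaced by prompted model calls, and a conventional RL algorithm would still learn from the (possibly LLM-shaped) rewards and (possibly LLM-generated) experience.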