Reinforcement learning
Adaptability
Perspective (graphical)
Function (biology)
Intersection (aeronautics)
Computer science
Field (mathematics)
Human-computer interaction
Artificial intelligence
Data science
Cognitive science
Knowledge management
Psychology
Engineering
Management
Evolutionary biology
Mathematics
Biology
Aerospace engineering
Economics
Pure mathematics
Authors
Timo Kaufmann, Paul Weng, Viktor Bengs, Eyke Hüllermeier
Source
Journal: Cornell University - arXiv
Date: 2023-12-25
Citations: 32
Identifier
DOI: 10.48550/arxiv.2312.14925
Abstract
Reinforcement learning from human feedback (RLHF) is a variant of reinforcement learning (RL) that learns from human feedback instead of relying on an engineered reward function. Building on prior work on the related setting of preference-based reinforcement learning (PbRL), it stands at the intersection of artificial intelligence and human-computer interaction. This positioning offers a promising avenue to enhance the performance and adaptability of intelligent systems while also improving the alignment of their objectives with human values. The training of large language models (LLMs) has impressively demonstrated this potential in recent years, where RLHF played a decisive role in directing the model's capabilities toward human objectives. This article provides a comprehensive overview of the fundamentals of RLHF, exploring the intricate dynamics between RL agents and human input. While recent focus has been on RLHF for LLMs, our survey adopts a broader perspective, examining the diverse applications and wide-ranging impact of the technique. We delve into the core principles that underpin RLHF, shedding light on the symbiotic relationship between algorithms and human feedback, and discuss the main research trends in the field. By synthesizing the current landscape of RLHF research, this article aims to provide researchers as well as practitioners with a comprehensive understanding of this rapidly growing field of research.
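To make the abstract's core idea concrete, the following is a minimal, illustrative sketch (not taken from the surveyed paper) of the typical first step of RLHF/PbRL: fitting a scalar reward model to pairwise human preferences with a Bradley-Terry objective, so that learned rewards can stand in for an engineered reward function. All class names, feature dimensions, and the synthetic data are assumptions for illustration only.

```python
# Illustrative sketch of preference-based reward learning (Bradley-Terry),
# the reward-modeling step that typically precedes policy optimization in RLHF.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a trajectory/response feature vector to a scalar reward."""
    def __init__(self, feature_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).squeeze(-1)

def preference_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry model: P(preferred beats rejected) = sigmoid(r_pref - r_rej);
    # we minimize the negative log-likelihood of the human preference labels.
    return -torch.nn.functional.logsigmoid(r_preferred - r_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = RewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Synthetic stand-in for human comparison data: pairs of feature vectors
    # in which the first element of each pair was labeled as preferred.
    preferred = torch.randn(256, 16) + 0.5
    rejected = torch.randn(256, 16) - 0.5

    for step in range(200):
        loss = preference_loss(model(preferred), model(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # The learned scalar reward can then replace an engineered reward function
    # when optimizing a policy with a standard RL algorithm (e.g., PPO).
    print(f"final preference loss: {loss.item():.3f}")
```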