Human-level control through deep reinforcement learning

Reinforcement learning · Artificial intelligence · Computer science · Diversity (cybernetics) · Control (management) · Perception · Human–computer interaction · Deep learning · Machine learning · Biology · Neuroscience
Authors
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis
Source
Journal: Nature [Nature Portfolio]
Volume/Issue: 518 (7540): 529-533; Cited by: 25,403
Identifier
DOI: 10.1038/nature14236
Abstract

The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
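The abstract describes the deep Q-network only at a high level. As a concrete illustration, the sketch below shows the update the paper is known for: minimizing the squared temporal-difference error between Q(s, a; θ) and the target r + γ·max_a' Q(s', a'; θ⁻) over minibatches drawn from an experience-replay buffer, with a target network θ⁻ that is synchronized only periodically. This is a minimal sketch, not the authors' code: the linear `QNetwork` stand-in, the `dqn_update` helper, and all hyperparameter values are illustrative assumptions, not the paper's convolutional architecture or training settings.

```python
# Minimal sketch of the DQN temporal-difference update (illustrative only).
# The linear QNetwork below is an assumed stand-in for the paper's conv net.
import random
from collections import deque

import numpy as np

GAMMA = 0.99       # discount factor (illustrative value)
BATCH_SIZE = 32    # minibatch size drawn from the replay buffer


class QNetwork:
    """Tiny linear stand-in for the paper's convolutional Q-network (assumption)."""

    def __init__(self, n_features: int, n_actions: int, rng: np.random.Generator):
        self.w = rng.normal(scale=0.01, size=(n_features, n_actions))

    def q_values(self, states: np.ndarray) -> np.ndarray:
        # One action value per (state, action) pair: shape (batch, n_actions).
        return states @ self.w

    def copy_from(self, other: "QNetwork") -> None:
        # Periodic target-network synchronization.
        self.w = other.w.copy()


def dqn_update(online: QNetwork, target: QNetwork,
               replay: deque, lr: float = 1e-3) -> float:
    """One gradient step on the squared TD error over a sampled minibatch."""
    batch = random.sample(replay, BATCH_SIZE)
    s, a, r, s_next, done = map(np.array, zip(*batch))

    # TD target: r + gamma * max_a' Q_target(s', a'); no bootstrap at episode end.
    q_next = target.q_values(s_next).max(axis=1)
    y = r + GAMMA * (1.0 - done) * q_next

    q = online.q_values(s)
    td_error = y - q[np.arange(BATCH_SIZE), a]

    # Gradient of 0.5 * mean(td_error^2) w.r.t. the linear weights.
    grad = np.zeros_like(online.w)
    for i in range(BATCH_SIZE):
        grad[:, a[i]] -= td_error[i] * s[i] / BATCH_SIZE
    online.w -= lr * grad
    return float(np.mean(td_error ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    online, target = QNetwork(8, 4, rng), QNetwork(8, 4, rng)
    target.copy_from(online)
    replay = deque(maxlen=10_000)
    for _ in range(1000):  # fill the buffer with random transitions for the demo
        s, s2 = rng.normal(size=8), rng.normal(size=8)
        replay.append((s, rng.integers(4), rng.normal(), s2, False))
    print("TD loss:", dqn_update(online, target, replay))
```

In the paper, these two ingredients, experience replay and a slowly updated target network, are what stabilize end-to-end Q-learning from raw pixels; the same loss applies unchanged when the linear stand-in is replaced by a convolutional network over stacked game frames.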