Reinforcement Learning
Transformer
Computer Science
Scalability
Architecture
Artificial Intelligence
Autoregressive Model
Machine Learning
Engineering
Voltage
Mathematics
Electrical Engineering
Database
Art
Econometrics
Visual Arts
Authors
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 342
Identifiers
DOI: 10.48550/arxiv.2106.01345
Abstract
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
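To make the "conditional sequence modeling" idea above concrete, the sketch below builds a small causally masked Transformer over interleaved (return-to-go, state, action) tokens and reads an action prediction off each state token's hidden state. This is a minimal PyTorch sketch under assumed shapes and hyperparameters; the class name, embedding sizes, and toy dimensions are illustrative and it is not the authors' released implementation.

```python
# Minimal sketch of return-conditioned sequence modeling in the spirit of
# Decision Transformer. Names, dimensions, and hyperparameters are assumptions.
import torch
import torch.nn as nn

class ReturnConditionedTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, embed_dim=128, n_layers=3,
                 n_heads=4, max_len=20):
        super().__init__()
        # Separate linear embeddings for return-to-go, state, and action tokens.
        self.embed_rtg = nn.Linear(1, embed_dim)
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.embed_timestep = nn.Embedding(max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.predict_action = nn.Linear(embed_dim, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) integer indices.
        B, T = states.shape[0], states.shape[1]
        time_emb = self.embed_timestep(timesteps)
        r = self.embed_rtg(rtg) + time_emb
        s = self.embed_state(states) + time_emb
        a = self.embed_action(actions) + time_emb
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack((r, s, a), dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token attends only to itself and earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float('-inf')), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Predict the next action from the hidden state at each state token.
        h = h.reshape(B, T, 3, -1)
        return self.predict_action(h[:, :, 1])

# Toy usage: predict actions for a batch of two length-10 trajectories,
# conditioned on a desired return-to-go at every step.
model = ReturnConditionedTransformer(state_dim=17, act_dim=6)
rtg = torch.randn(2, 10, 1)
states = torch.randn(2, 10, 17)
actions = torch.randn(2, 10, 6)
timesteps = torch.arange(10).repeat(2, 1)
pred_actions = model(rtg, states, actions, timesteps)  # shape (2, 10, 6)
```

At deployment time, the same model can be prompted with a high target return and rolled out autoregressively, decrementing the return-to-go by each observed reward; the training objective in this sketch would simply regress predicted actions onto the actions in the offline dataset.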