Keywords
Reinforcement learning
Computer science
Robustness
Planning horizon
Motion planning
Artificial intelligence
Artificial neural network
Mathematical optimization
Robotics
Mathematics
Authors
Bo Hu, Lei Jiang, Sunan Zhang, Qiang Wang
Source
Journal: IEEE Transactions on Transportation Electrification
Date: 2023-12-25
Volume/Issue: 10(3): 6488-6496
Citations: 5
Identifier
DOI: 10.1109/tte.2023.3347278
Abstract
Reinforcement learning (RL) has the capability to discover optimal interactions with the surrounding environment, with the advantage that nearly all required computations can be performed offline. Nevertheless, the lack of explainability of RL-based solutions may prevent their large-scale application to industrial autonomous-vehicle tasks. Furthermore, RL methods tend to be unsafe and brittle in scenarios not encountered during training. Conversely, optimization-based methods offer a substantial level of explainability and, through the explicit inclusion of safety constraints, can guarantee system safety. In this context, building upon the RL framework, a fusion algorithm that combines the advantages of RL-based and optimization-based schemes is proposed. Specifically, unlike traditional RL-based solutions, which map directly from perception to control using only neural networks, this work introduces an uncertainty-aware interval-prediction mechanism to compute the set of states that can be reached over the planning time horizon. On this basis, a robust control framework is presented that guarantees system safety while accounting for worst-case performance scenarios. To validate the proposed algorithm, the task of an autonomous vehicle merging onto a highway from an on-ramp is simulated in SUMO. The results show that the proposed motion planning and control method combines the advantages of RL-based and optimization-based methods and achieves balanced performance in smoothness, computational efficiency, explainability, and robustness.
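As a rough illustration of the interval-prediction idea described in the abstract, the Python sketch below propagates a one-dimensional position/velocity interval for a surrounding vehicle under an assumed bounded acceleration, then checks a candidate merging plan against the worst-case (closest) position of that vehicle. The kinematic model, bounds, minimum-gap constraint, and all names are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch: 1-D longitudinal interval reachability with a worst-case
# safety check. All modeling choices here are assumptions for illustration;
# they do not reproduce the paper's method.

from dataclasses import dataclass


@dataclass
class Interval:
    lo: float
    hi: float


def propagate_intervals(p0, v0, a_lo, a_hi, dt, horizon_steps):
    """Propagate position/velocity intervals under bounded acceleration.

    p0, v0: initial position and speed (assumed exactly known here).
    [a_lo, a_hi]: assumed bound on the surrounding vehicle's acceleration.
    Returns one (position interval, velocity interval) pair per step.
    """
    p = Interval(p0, p0)
    v = Interval(v0, v0)
    reachable = []
    for _ in range(horizon_steps):
        # Velocity extremes after one step (speed clamped at zero from below).
        v = Interval(max(0.0, v.lo + a_lo * dt), v.hi + a_hi * dt)
        # Position extremes grow with the corresponding velocity extremes.
        p = Interval(p.lo + v.lo * dt, p.hi + v.hi * dt)
        reachable.append((p, v))
    return reachable


def merge_is_safe(ego_positions, lead_reachable, min_gap=10.0):
    """Check an ego plan against the worst case: the lead as close as possible."""
    for p_ego, (p_lead, _) in zip(ego_positions, lead_reachable):
        if p_lead.lo - p_ego < min_gap:
            return False
    return True


if __name__ == "__main__":
    dt, steps = 0.1, 30                                   # 3 s horizon (assumed)
    lead = propagate_intervals(p0=40.0, v0=20.0, a_lo=-3.0, a_hi=1.5,
                               dt=dt, horizon_steps=steps)
    ego_plan = [22.0 * dt * (k + 1) for k in range(steps)]  # candidate RL plan
    print("plan accepted:", merge_is_safe(ego_plan, lead))
```

In a fusion scheme of this kind, a check like merge_is_safe could act as the optimization-side safety filter on top of an RL policy's proposed trajectory; the details of how the paper couples the two are not shown here.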