Computer science
Train
Reinforcement learning
Energy management
Energy consumption
Dynamic programming
Mathematical optimization
Battery (electricity)
Fuel efficiency
Optimal control
Power (physics)
Automotive engineering
Algorithm
Energy (signal processing)
Engineering
Artificial intelligence
Electrical engineering
Mathematics
Physics
Statistics
Quantum mechanics
Geography
Cartography
Authors
Qi Li,Xiang Meng,Fei Gao,Guorui Zhang,Weirong Chen
Identifiers
DOI:10.1109/tie.2021.3113021
Abstract
The energy management strategy (EMS) is key to the performance of a fuel cell / battery hybrid system. Reinforcement learning (RL) has been introduced into this field and has gradually become a research focus. However, traditional EMSs consider only energy consumption when optimizing operating economy and ignore the cost caused by power source degradation, which results in poor economy in terms of Total Cost of Ownership (TCO). In addition, most studied RL algorithms suffer from overestimation and restrict the battery SOC in an improper way, which also degrades control performance. To solve these problems, this paper first establishes a TCO model that includes energy consumption, equivalent energy consumption, and degradation of the power sources, and then adopts a Double Q-learning RL algorithm with a state constraint and a variable action space to determine the optimal EMS. Finally, using a hardware-in-the-loop platform, the feasibility, superiority, and generalization of the proposed EMS are demonstrated by comparison with optimal dynamic programming, a traditional RL EMS, and the equivalent consumption minimization strategy (ECMS) under both training and unknown operating conditions. The results show that the proposed strategy achieves high global optimality and excellent SOC control under both training and unknown conditions.
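As a rough illustration of the control scheme described in the abstract, the sketch below shows tabular Double Q-learning with an SOC-dependent (variable) action space. The state grid, fuel-cell power levels, SOC bounds, and reward structure are illustrative assumptions for this sketch, not the authors' actual TCO model or parameters.

```python
import numpy as np

# Minimal tabular Double Q-learning sketch for a fuel cell / battery EMS.
# The discretisation, power levels, SOC window, and reward shaping below are
# assumptions for illustration; the paper's TCO-based reward combines hydrogen
# consumption, equivalent battery energy consumption, and degradation costs.

N_SOC, N_DEMAND = 51, 21                         # assumed state grid: SOC x demanded power
FC_POWER_LEVELS = np.linspace(0.0, 60.0, 13)     # assumed candidate fuel-cell powers [kW]
SOC_MIN, SOC_MAX = 0.4, 0.8                      # assumed SOC constraint window

Q_A = np.zeros((N_SOC, N_DEMAND, len(FC_POWER_LEVELS)))
Q_B = np.zeros_like(Q_A)

def admissible_actions(soc):
    """Variable action space: near the SOC bounds, drop candidate actions
    that would push the SOC further out of the allowed window."""
    mask = np.ones(len(FC_POWER_LEVELS), dtype=bool)
    if soc <= SOC_MIN:      # battery nearly empty: forbid low fuel-cell power
        mask &= FC_POWER_LEVELS >= np.median(FC_POWER_LEVELS)
    elif soc >= SOC_MAX:    # battery nearly full: forbid high fuel-cell power
        mask &= FC_POWER_LEVELS <= np.median(FC_POWER_LEVELS)
    return np.flatnonzero(mask)

def epsilon_greedy(state, soc, eps=0.1):
    """Pick an admissible action; exploit using the sum of both estimators."""
    acts = admissible_actions(soc)
    if np.random.rand() < eps:
        return np.random.choice(acts)
    q = (Q_A[state] + Q_B[state])[acts]
    return acts[int(np.argmax(q))]

def double_q_update(s, a, r, s_next, soc_next, alpha=0.1, gamma=0.99):
    """Double Q-learning update: one table selects the greedy next action and
    the other evaluates it, which counters the overestimation bias of
    standard Q-learning. r is the (negative) per-step cost."""
    acts = admissible_actions(soc_next)
    if np.random.rand() < 0.5:
        a_star = acts[int(np.argmax(Q_A[s_next][acts]))]
        Q_A[s][a] += alpha * (r + gamma * Q_B[s_next][a_star] - Q_A[s][a])
    else:
        a_star = acts[int(np.argmax(Q_B[s_next][acts]))]
        Q_B[s][a] += alpha * (r + gamma * Q_A[s_next][a_star] - Q_B[s][a])
```

In a full implementation, `s` would index the discretised (SOC, demanded power) state, the rollout would follow recorded driving cycles, and the reward would be the negative per-step TCO assembled from the energy, equivalent energy, and degradation terms described in the abstract.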