Fuel cell
Energy management
Automotive engineering
Energy (signal processing)
Environmental science
Computer science
Engineering
Physics
Chemical engineering
Quantum mechanics
Identifier
DOI:10.1177/09544070251324802
Abstract
In recent years, Deep Reinforcement Learning (DRL) has demonstrated significant potential in Energy Management Strategies (EMS) for Fuel Cell Vehicles (FCVs). Among DRL methods, the Soft Actor-Critic (SAC) algorithm has attracted widespread attention for its strong performance, yet it exhibits shortcomings in convergence speed, stability, and achieved reward. Moreover, the effectiveness of SAC in mitigating the degradation of fuel cells and lithium batteries remains limited. This paper therefore proposes an improved SAC (I-SAC) algorithm that incorporates Prioritized Experience Replay (PER) and Self-Adaptive Temperature Control (SATC) to enhance performance and effectively extend the lifespan of fuel cells and lithium batteries. Simulation results show that, compared with EMSs based on Double Deep Q-Network (DDQN) and Deep Deterministic Policy Gradient (DDPG), I-SAC significantly reduces hydrogen consumption under various operating conditions, while reducing the degradation of fuel cells and lithium batteries by up to 10.615% and 34.347%, respectively. This study presents a new, efficient, and robust EMS solution for FCVs.
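The abstract names two standard SAC extensions, Prioritized Experience Replay and adaptive temperature control. The sketch below illustrates how these components are commonly implemented; it is a minimal, generic illustration, not the paper's actual I-SAC code, and names such as PERBuffer and temperature_step are hypothetical.

```python
import numpy as np
import torch

class PERBuffer:
    """Minimal proportional prioritized replay buffer (generic PER sketch)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha              # how strongly priorities bias sampling
        self.data = []                  # stored transitions
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def push(self, transition):
        # New transitions get the current max priority so they are sampled at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[:len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by prioritization.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority is proportional to the magnitude of the TD error.
        self.priorities[idx] = np.abs(td_errors) + eps


# Adaptive temperature: learn log(alpha) so the policy entropy tracks a target,
# as in standard SAC with automatic entropy tuning. target_entropy is often
# set to -|action_dim|; the value here is an illustrative placeholder.
target_entropy = -4.0
log_alpha = torch.zeros(1, requires_grad=True)
alpha_opt = torch.optim.Adam([log_alpha], lr=3e-4)

def temperature_step(log_prob):
    """One gradient step on the temperature given a batch of log pi(a|s)."""
    alpha_loss = -(log_alpha * (log_prob + target_entropy).detach()).mean()
    alpha_opt.zero_grad()
    alpha_loss.backward()
    alpha_opt.step()
    return log_alpha.exp().item()       # current temperature alpha
```

In a training loop, TD errors from the critic update would be fed back through update_priorities, and the alpha returned by temperature_step would scale the entropy term in the actor and critic losses.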