Computer science
Convergence (economics)
Driving cycle
Electric vehicle
Series (stratigraphy)
Artificial neural network
Energy management
Energy (signal processing)
Algorithm
Artificial intelligence
Power (physics)
Mathematics
Physics
Statistics
Paleontology
Economics
Biology
Quantum mechanics
Economic growth
Author
Yuecheng Li, Hongwen He, Jiankun Peng, Jingda Wu
Source
Journal: DEStech Transactions on Environment, Energy and Earth Science
[DEStech Publications]
Date: 2019-02-04
Volume/Issue: (iceee)
Citations: 30
Identifier
DOI: 10.12783/dteees/iceee2018/27794
Abstract
Building on a previous deep Q-network (DQN) based energy management strategy (EMS), this paper explores two improvements for more efficient and stable training and performance. First, the architecture of the original DQN is changed to learn separately the value of the current driving/vehicle state and the advantages of EMS actions under that state, and a duplicated network is adopted for Q-value calculation. Second, prioritized replay is introduced for more efficient use of data during training of the DQN based EMS. Simulation results show that the improved DQN based EMS converges faster and achieves higher reward than the original DQN based EMS. Simulation on a typical Chinese urban driving cycle for a series hybrid electric vehicle indicates that the fuel economy of the improved DQN (6.07 L/100 km) is 8.4% higher than that of the DP based EMS, exceeding the original DQN based EMS (6.24 L/100 km) by about 3%.
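The two mechanisms named in the abstract are standard DQN refinements: a dueling head that splits the Q-function into a state value and per-action advantages, and prioritized replay that samples transitions in proportion to their TD error. The paper does not give its equations or code, so the sketch below is a minimal illustration of those two standard formulas (all function names are hypothetical), not the authors' implementation:

```python
import numpy as np

def dueling_q(state_value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps V and A identifiable, so the
    network can learn the state's value separately from which action
    (EMS decision) is better in that state.
    """
    advantages = np.asarray(advantages, dtype=float)
    return state_value + advantages - advantages.mean()

def priority_probs(td_errors, alpha=0.6, eps=1e-6):
    """Proportional prioritized replay: P(i) ∝ (|δ_i| + ε)^α.

    Transitions with larger TD error δ are replayed more often,
    which is the 'more efficient data utilization' the abstract cites.
    """
    p = (np.abs(np.asarray(td_errors, dtype=float)) + eps) ** alpha
    return p / p.sum()

# Example: three candidate EMS actions in one state.
q = dueling_q(0.5, [1.0, 2.0, 3.0])      # -> [-0.5, 0.5, 1.5]
probs = priority_probs([0.1, 1.0, 5.0])  # largest TD error sampled most often
```

The "duplicated network" in the abstract refers to a target network: a periodically synchronized copy of the Q-network used to compute the bootstrap target, which stabilizes training by decoupling it from the rapidly changing online weights.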