Keywords
Reinforcement Learning; Driving Cycle; Computer Science; Exploitation; Energy Management; Dynamic Programming; Fuel Efficiency; Control (Management); Electric Vehicle; Automotive Engineering; Energy (Signal Processing); Engineering; Artificial Intelligence; Algorithms; Computer Security; Power (Physics); Mathematics; Quantum Mechanics; Statistics; Physics
Authors
Luciano Rolando, Nicola Campanelli, Luigi Tresca, Luca Pulvirenti, Federico Millo
Abstract
In recent years, the urgent need to fully exploit the fuel economy potential of Electrified Vehicles (xEVs) through the optimal design of their Energy Management System (EMS) has led to growing interest in Machine Learning (ML) techniques. Among them, Reinforcement Learning (RL) is one of the most promising approaches thanks to its distinctive structure, in which an agent learns the optimal control strategy by interacting directly with an environment, making decisions, and receiving feedback in the form of rewards. In this study, a Soft Actor-Critic (SAC) agent, which exploits a stochastic policy, was implemented on a digital twin of a state-of-the-art diesel Plug-in Hybrid Electric Vehicle (PHEV) available on the European market. The SAC agent was trained to enhance the fuel economy of the PHEV while guaranteeing battery charge sustainability. The potential of the proposed control strategy was first assessed on the Worldwide harmonized Light-duty vehicles Test Cycle (WLTC) and benchmarked against a Dynamic Programming (DP) optimization to evaluate the performance of two different reward formulations. The best-performing agent was then tested on two additional driving cycles from the Environmental Protection Agency (EPA) regulatory framework: the Federal Test Procedure-75 (FTP75) and the Highway Fuel Economy Test (HFET), representative of urban and highway driving scenarios, respectively. The best-performing SAC model achieved results close to the DP reference on the WLTC, with a fuel consumption gap below 9% across all testing cycles.
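The core SAC idea referenced in the abstract, an agent with an entropy-regularized (stochastic) policy learning a power-split strategy that trades fuel use against battery charge sustainability, can be sketched in tabular form on a toy environment. Everything below (the `ToyEMSEnv` dynamics, the discrete action set, the reward weights, and the tabular soft Q-update) is a hypothetical, simplified illustration and not the paper's digital twin or its actual SAC implementation:

```python
import numpy as np

class ToyEMSEnv:
    """Toy charge-sustaining EMS (hypothetical, not the paper's vehicle model).

    State: battery state of charge (SOC) in [0, 1].
    Action: fraction of the demanded power supplied by the engine;
    values above 1.0 mean the engine also recharges the battery.
    """
    ACTIONS = [0.0, 0.5, 1.0, 1.5]

    def __init__(self, soc_target=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.soc_target = soc_target

    def reset(self):
        self.soc = 0.5
        return self.soc

    def step(self, a_idx):
        engine_frac = self.ACTIONS[a_idx]
        demand = self.rng.uniform(0.01, 0.05)          # power demand from the cycle
        fuel = engine_frac * demand                    # fuel tracks the engine share
        self.soc += (engine_frac - 1.0) * demand       # battery covers the balance
        self.soc = float(np.clip(self.soc, 0.0, 1.0))
        # Reward penalizes fuel use and SOC drift (charge sustainability)
        reward = -fuel - 2.0 * abs(self.soc - self.soc_target)
        return self.soc, reward

def soft_policy(q_row, alpha=0.05):
    """Entropy-regularized (soft) policy: pi(a|s) proportional to exp(Q(s,a)/alpha)."""
    z = (q_row - q_row.max()) / alpha                  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def train(episodes=200, steps=100, gamma=0.99, lr=0.1, alpha=0.05, n_bins=20):
    """Tabular soft Q-learning over discretized SOC (a stand-in for the SAC critic)."""
    env = ToyEMSEnv()
    q = np.zeros((n_bins, len(env.ACTIONS)))
    disc = lambda s: min(int(s * n_bins), n_bins - 1)  # SOC -> state bin
    rng = np.random.default_rng(1)
    for _ in range(episodes):
        s = disc(env.reset())
        for _ in range(steps):
            a = rng.choice(len(env.ACTIONS), p=soft_policy(q[s], alpha))
            soc, r = env.step(a)
            s2 = disc(soc)
            # Soft Bellman backup: V(s') = alpha * logsumexp(Q(s', .) / alpha)
            m = q[s2].max()
            v2 = m + alpha * np.log(np.sum(np.exp((q[s2] - m) / alpha)))
            q[s, a] += lr * (r + gamma * v2 - q[s, a])
            s = s2
    return q, env
```

Full SAC additionally uses neural actor and critic networks, replay buffers, and twin critics; this sketch keeps only the soft (entropy-regularized) policy and the soft value backup that distinguish SAC from a deterministic actor-critic.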