Authors
Peng Liang,Yangtao Chen,Yafeng Sun,Ying Huang,Wei Li
Identifier
DOI:10.1016/j.eswa.2023.122164
Abstract
Many-objective optimization problems (MaOPs) are challenging tasks that involve optimizing many conflicting objectives simultaneously. In recent years, decomposition-based many-objective evolutionary algorithms have effectively maintained a balance between convergence and diversity. However, these algorithms struggle to accurately approximate the complex geometric structure of irregular Pareto fronts (PFs). This paper proposes an information entropy-driven evolutionary algorithm based on reinforcement learning (RL-RVEA) for many-objective optimization with irregular Pareto fronts. The proposed algorithm leverages reinforcement learning to guide the evolution process: by interacting with the environment, it learns the shape and features of the PF and adaptively adjusts the distribution of reference vectors to cover the PF structure effectively. Moreover, an information entropy-driven adaptive scalarization approach is designed to reflect the diversity of nondominated solutions, which enables the algorithm to balance multiple competing objectives adaptively and to select solutions efficiently while maintaining individual diversity. To verify its effectiveness, RL-RVEA is compared with seven state-of-the-art algorithms on the DTLZ, MaF, and WFG test suites and on four real-world MaOPs. The experimental results demonstrate that the proposed algorithm provides a novel and practical method for addressing MaOPs with irregular PFs.
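The abstract does not give the paper's exact entropy formulation, but the idea of measuring the diversity of nondominated solutions with information entropy can be illustrated with a minimal sketch: assign each solution to its nearest reference vector (by cosine similarity) and compute the Shannon entropy of the resulting niche occupancy. The function name and the assignment rule here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def reference_entropy(solutions, ref_vectors):
    """Shannon entropy of how nondominated solutions spread over
    reference vectors (illustrative sketch, not the paper's exact
    formulation). A uniform spread (high entropy) suggests good
    diversity; clustering into few niches (low entropy) signals
    uncovered regions of the Pareto front."""
    # Normalize objective vectors and reference vectors to unit length.
    sols = solutions / np.linalg.norm(solutions, axis=1, keepdims=True)
    refs = ref_vectors / np.linalg.norm(ref_vectors, axis=1, keepdims=True)
    # Assign each solution to the reference vector with the largest
    # cosine similarity (smallest angle).
    assignment = np.argmax(sols @ refs.T, axis=1)
    counts = np.bincount(assignment, minlength=len(ref_vectors))
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty niches, using the convention 0*log(0) = 0
    return float(-np.sum(p * np.log(p)))
```

With three axis-aligned reference vectors, a population with one solution per niche attains the maximum entropy log(3), while a population crowded around a single vector yields entropy 0; an adaptive scheme could use this signal to redistribute reference vectors toward sparsely covered regions.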