Computer science
Particle swarm optimization
Scalability
Multi-swarm optimization
Ratio (proportion)
Mathematical optimization
Metaheuristic
Artificial intelligence
Machine learning
Mathematics
Quantum mechanics
Database
Physics
Authors
Zijia Wang, Qiang Yang, Yuhui Zhang, Shuhong Chen, Yuan-Gen Wang
Identifier
DOI:10.1016/j.asoc.2023.110101
Abstract
Large-scale optimization problems (LSOPs) have become increasingly significant and challenging in the evolutionary computation (EC) community. This article proposes a superiority combination learning distributed particle swarm optimization (SCLDPSO) algorithm for LSOPs. The algorithm adopts a master–slave multi-subpopulation distributed model, which enables full communication and information exchange among different subpopulations and thereby enhances diversity. Moreover, a superiority combination learning (SCL) strategy is proposed, in which each worse particle in a poorly performing subpopulation randomly selects two well-performing subpopulations containing better particles to learn from. During learning, each well-performing subpopulation generates a learning particle by merging different dimensions of different particles, thereby combining the strengths of all particles in that subpopulation. The worse particle can significantly improve itself by learning from these two superiority combination particles, leading to a successful search. Experimental results show that SCLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms on both the CEC2010 and CEC2013 large-scale optimization test suites, including the winner of the competition on large-scale optimization. Extended experiments with dimensions increased to 2000 demonstrate the scalability of SCLDPSO, and an application to large-scale portfolio optimization problems further illustrates its applicability.
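The SCL step described in the abstract lends itself to a short illustration. The sketch below is a minimal, hypothetical reading of that description, not the paper's implementation: the per-dimension donor selection, the PSO-style velocity update toward the two combination particles, and all names and coefficients (`scl_update`, `w`, `c1`, `c2`) are assumptions introduced here for illustration.

```python
import numpy as np

def superiority_combination(subpop: np.ndarray) -> np.ndarray:
    """Build one learning particle by merging dimensions across a
    well-performing subpopulation: each dimension is copied from a
    randomly chosen particle of that subpopulation."""
    n_particles, dim = subpop.shape
    donors = np.random.randint(n_particles, size=dim)  # one donor per dimension
    return subpop[donors, np.arange(dim)]

def scl_update(worse_pos, worse_vel, subpop_a, subpop_b,
               w=0.7, c1=1.5, c2=1.5):
    """Move a worse particle toward two superiority-combination
    particles drawn from two better subpopulations (hypothetical
    PSO-style update; coefficients are illustrative, not the paper's)."""
    dim = worse_pos.size
    exemplar_a = superiority_combination(subpop_a)
    exemplar_b = superiority_combination(subpop_b)
    r1, r2 = np.random.rand(dim), np.random.rand(dim)
    new_vel = (w * worse_vel
               + c1 * r1 * (exemplar_a - worse_pos)
               + c2 * r2 * (exemplar_b - worse_pos))
    return worse_pos + new_vel, new_vel

# Tiny demo on a 10-dimensional sphere function.
if __name__ == "__main__":
    dim = 10
    subpop_a = np.random.uniform(-5, 5, (20, dim))
    subpop_b = np.random.uniform(-5, 5, (20, dim))
    worse = np.random.uniform(-5, 5, dim)
    vel = np.zeros(dim)
    new_pos, _ = scl_update(worse, vel, subpop_a, subpop_b)
    print("old fitness:", np.sum(worse ** 2))
    print("new fitness:", np.sum(new_pos ** 2))
```

Because each dimension of an exemplar can come from a different particle, a single combination particle can aggregate the best coordinate values scattered across a whole subpopulation, which is the intuition the abstract gives for why worse particles improve quickly.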