Reinforcement learning
Computer science
Maximization
Benchmark (surveying)
Artificial intelligence
Heuristic
Mathematical optimization
Evolutionary algorithm
Population
Machine learning
Mathematics
Geography
Sociology
Geodesy
Demography
Authors
Lijia Ma,Zengyang Shao,Xiaocong Li,Qiuzhen Lin,Jianqiang Li,Victor C. M. Leung,Asoke K. Nandi
Identifiers
DOI:10.1109/tetci.2021.3136643
Abstract
Influence maximization (IM) in complex networks aims to activate a small subset of seed nodes that maximizes the propagation of influence. Studies on IM have attracted much attention due to their wide applications, such as item recommendation, viral marketing, information propagation, and disease immunization. Existing works mainly model the IM problem as a discrete optimization problem and use either approximate or meta-heuristic algorithms to address it. However, it is difficult for these works to achieve a good tradeoff between effectiveness and efficiency, due to the NP-hardness of the IM problem and the large scale of real networks. In this article, we propose an evolutionary deep reinforcement learning algorithm (called EDRL-IM) for IM in complex networks. First, EDRL-IM models the IM problem as a continuous optimization over the weight parameters of a deep Q-network (DQN). Then, it combines an evolutionary algorithm (EA) and a deep reinforcement learning (DRL) algorithm to evolve the DQN. The EA evolves a population of individuals, each of which represents a candidate DQN and returns a solution to the IM problem through a dynamic Markov node selection strategy, while the DRL integrates all information and network-specific knowledge of the DQNs to accelerate their evolution. Systematic experiments on both benchmark and real-world networks show the superiority of EDRL-IM over state-of-the-art IM methods in finding seed nodes.
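The abstract describes framing IM as evolving the weights of a node-scoring Q-network, with a population of candidate networks evaluated by the influence they spread. The following is a minimal illustrative sketch of that general idea only, not the paper's EDRL-IM implementation: the single degree-based feature, the Independent Cascade parameters, and the truncation-plus-mutation EA operators are all assumptions chosen for brevity.

```python
import random
import numpy as np

def independent_cascade(adj, seeds, p=0.1, rng=None):
    """Simulate one Independent Cascade run; return the number of activated nodes."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def node_features(adj, n):
    """One toy per-node feature: normalized degree."""
    deg = np.array([len(adj[u]) for u in range(n)], dtype=float)
    return (deg / (deg.max() + 1e-9)).reshape(n, 1)

def select_seeds(weights, feats, k):
    """Score nodes with a linear 'network' and pick the top-k as seeds."""
    scores = (feats @ weights).ravel()
    return list(np.argsort(-scores)[:k])

def evolve(adj, n, k=2, pop_size=8, gens=10, sims=5, seed=0):
    """Evolve a population of weight vectors; fitness = mean simulated spread."""
    rng = np.random.default_rng(seed)
    feats = node_features(adj, n)
    pop = rng.normal(size=(pop_size, feats.shape[1]))

    def fitness(w):
        seeds = select_seeds(w, feats, k)
        r = random.Random(seed)  # fixed stream for a fair, repeatable comparison
        return np.mean([independent_cascade(adj, seeds, rng=r) for _ in range(sims)])

    for _ in range(gens):
        fits = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(-fits)[: pop_size // 2]]       # keep best half
        children = elite + rng.normal(scale=0.3, size=elite.shape)  # mutate
        pop = np.vstack([elite, children])
    best = pop[np.argmax([fitness(w) for w in pop])]
    return select_seeds(best, feats, k)
```

In the full method the linear scorer would be a DQN over richer node/state features, and the DRL component would train population members between EA generations rather than relying on mutation alone.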