Reinforcement learning
Computer science
Feature selection
Artificial intelligence
Differential evolution
Feature (linguistics)
Selection (genetic algorithm)
Algorithm
Reinforcement
Differential (mechanical device)
Learning classifier system
Machine learning
Pattern recognition (psychology)
Engineering
Philosophy
Linguistics
Structural engineering
Aerospace engineering
Authors
Xiaobing Yu,Zhenpeng Hu,Wenguan Luo,Yu Xue
Identifiers
DOI:10.1016/j.ins.2024.120185
Abstract
Feature selection (FS) determines an optimal subset of features from a raw dataset, reducing dimensionality and improving accuracy. In this study, FS is modeled as a multi-objective optimization problem, and a reinforcement learning-based multi-objective differential evolution algorithm (RLMODE) is proposed to solve it. First, a reinforcement learning-based offspring generation strategy is designed. Built on the Q-learning framework, it treats each individual in the population as an agent. The dominance relationship between an agent and its predecessor is used to encode the state. A well-chosen action set containing three typical differential evolution mutation operators is available to each agent, and the reward is used to update the agent's exclusive Q-table. Moreover, a novel Pareto front (PF) relearning strategy is devised to allow adequate communication among individuals on the PF: it reevaluates the potential value of all individuals on the PF as a whole, promoting the propagation of excellent solutions and improving PF diversity. The proposed RLMODE first demonstrates its strength on benchmark problems; excellent results on 17 datasets then confirm its merits in terms of dimensionality reduction and accuracy improvement. Therefore, the proposed RLMODE method is a promising FS technique.
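The offspring generation strategy described above can be sketched in code. The following is a minimal, hypothetical illustration of the general mechanism, not the authors' exact implementation: each individual keeps its own Q-table whose states encode the child-versus-parent dominance relation and whose actions are three common DE mutation operators; the class name, state and reward encodings, and hyperparameter values are all illustrative assumptions.

```python
import random

# Assumed action set: three typical DE mutation operators named in the
# standard DE notation (the abstract does not specify which three).
ACTIONS = ["DE/rand/1", "DE/best/1", "DE/current-to-best/1"]

# Assumed state encoding: dominance relation of the new offspring
# relative to its parent.
STATES = ["dominates", "non_dominated", "dominated"]


class AgentQTable:
    """Per-individual ('agent') Q-table with epsilon-greedy selection."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        # One row per state, one column per mutation operator.
        self.q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy: explore a random operator, else exploit the
        # operator with the highest Q-value for the current state.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q[state], key=self.q[state].get)

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[next_state].values())
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action]
        )


def reward_from_dominance(next_state):
    # Assumed reward shaping: positive when the offspring dominates its
    # parent, negative when it is dominated, neutral otherwise.
    return {"dominates": 1.0, "non_dominated": 0.0, "dominated": -1.0}[next_state]
```

In a full algorithm, each generation would evaluate the offspring produced by the chosen operator, derive the dominance state, and feed the resulting reward back through `update`, so that operators that tend to produce dominating offspring are selected more often over time.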