Automated Feature Selection: A Reinforcement Learning Perspective

Keywords: Reinforcement learning, Computer science, Feature learning, Artificial intelligence, Feature selection, Machine learning, Autoencoder, Feature (linguistics), Curse of dimensionality, Softmax function, Convolutional neural network, Artificial neural network, Philosophy, Linguistics
Authors
Kunpeng Liu, Yanjie Fu, Le Wu, Xiaolin Li, Charu C. Aggarwal, Hui Xiong
Source
Journal: IEEE Transactions on Knowledge and Data Engineering [IEEE Computer Society]
Volume/Issue: 1-1 | Citations: 45
Identifier
DOI: 10.1109/tkde.2021.3115477
Abstract

Feature selection is a critical step in machine learning that identifies the most important features for a subsequent prediction task. Effective feature selection helps reduce dimensionality, improve prediction accuracy, and increase result comprehensibility. Finding the optimal feature subset is traditionally challenging because the feature subset space can be extremely large. While much effort has been devoted to feature selection, reinforcement learning offers a new perspective toward a more globally optimal search strategy. In our preliminary work, we propose a multi-agent reinforcement learning framework for the feature selection problem. Specifically, we first reformulate feature selection within a reinforcement learning framework by regarding each feature as an agent. We then derive the state of the environment in three ways, i.e., statistical description, autoencoder, and graph convolutional network (GCN), so as to obtain a fixed-length state representation as the input to reinforcement learning. In addition, we study how coordination among feature agents can be improved through a more effective reward scheme, and we provide a GMM-based generative rectified sampling strategy to accelerate the convergence of multi-agent reinforcement learning. Our method searches the feature subset space more globally and, owing to the nature of reinforcement learning, can be easily adapted to real-time scenarios. In the extended version, we further accelerate the framework from two aspects. From the sampling aspect, we achieve indirect acceleration by proposing a rank-based softmax sampling strategy. From the exploration aspect, we achieve direct acceleration by proposing an interactive reinforcement learning (IRL)-based exploration strategy. Extensive experimental results show significant improvements of the proposed method over conventional approaches.
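
To make the formulation concrete, the sketch below illustrates the core idea of treating each feature as an agent whose keep/drop decisions are rewarded by downstream predictive accuracy. This is not the authors' implementation: the dataset (scikit-learn's breast-cancer data), the decision-tree evaluator, and the stateless per-agent value estimates are stand-in assumptions, and the paper's state representations (statistics, autoencoder, GCN), reward schemes, GMM-based and rank-based sampling, and IRL exploration are all omitted for brevity.

```python
# Minimal sketch (assumptions noted above), not the paper's method:
# one agent per feature, binary keep/drop actions, shared global reward.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset
n_features = X.shape[1]

# Each feature agent keeps value estimates for its two actions:
# 0 = drop the feature, 1 = keep the feature.
q_values = np.zeros((n_features, 2))
epsilon, alpha = 0.2, 0.1                    # exploration rate, learning rate

def reward_of(mask):
    """Shared reward: cross-validated accuracy of a downstream classifier
    trained on the currently selected feature subset."""
    if mask.sum() == 0:                      # empty subsets earn nothing
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

best_mask = np.ones(n_features, dtype=int)
best_reward = reward_of(best_mask)

for step in range(100):
    # Every agent acts epsilon-greedily on its own value estimates.
    greedy = q_values.argmax(axis=1)
    explore = rng.random(n_features) < epsilon
    actions = np.where(explore, rng.integers(0, 2, size=n_features), greedy)

    # All agents receive the same global reward for the joint selection;
    # the paper studies finer-grained reward assignment and richer states.
    r = reward_of(actions)
    idx = np.arange(n_features)
    q_values[idx, actions] += alpha * (r - q_values[idx, actions])

    if r > best_reward:
        best_reward, best_mask = r, actions.copy()

print(f"kept {best_mask.sum()}/{n_features} features, "
      f"cv accuracy {best_reward:.3f}")
```

In a full pipeline, the best discovered mask would be passed to the actual downstream model; the bandit-style updates here are only meant to show how per-feature agents and a shared reward interact.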