Oversampling
Computer science
Feature (linguistics)
Pattern recognition (psychology)
Artificial intelligence
Data mining
Machine learning
Computer network
Philosophy
Linguistics
Bandwidth (computing)
Authors
Fei Wang, Ming Zheng, Xiaowen Hu, Hongchao Li, Taochun Wang, Fulong Chen
Identifier
DOI:10.1016/j.asoc.2024.111774
Abstract
Classification performance often deteriorates when machine learning algorithms are trained on imbalanced data. Although oversampling methods have been successfully employed to address imbalanced data, existing approaches have limitations such as information loss, difficulty in parameter selection, and boundary effects arising from the computation of nearest neighbors and densities. Therefore, this study introduces a novel oversampling method called Feature Information Aggregation Oversampling (FIAO). FIAO leverages feature information, including feature importance, feature density, and standard deviation, to guide the oversampling process. Initially, the feature information is used to partition each feature's value range into intervals suitable for feature generation. Subsequently, new feature values are generated within these intervals. Finally, the generated features are integrated into the minority-class data to achieve effective oversampling. The key advantage of FIAO lies in its ability to fully exploit the intrinsic information carried by the features themselves, thus circumventing issues related to parameter selection and boundary effects. To assess its efficacy, extensive experiments were conducted on 12 widely used benchmark datasets, comparing the proposed method against 10 popular resampling methods across four commonly used classifiers. The experimental results show that FIAO performs well in multiple application scenarios and achieves the best overall performance.
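The three steps the abstract describes (derive per-feature intervals from minority-class statistics, generate feature values inside those intervals, append the synthetic samples to the minority class) can be illustrated with a minimal sketch. This is not the authors' actual FIAO algorithm: the interval rule here (mean ± one standard deviation, clipped to the observed range) and uniform sampling are simplifying assumptions standing in for the paper's importance- and density-guided interval construction; the function name `interval_oversample_sketch` is hypothetical.

```python
import numpy as np

def interval_oversample_sketch(X_min, n_new, rng=None):
    """Hypothetical sketch of interval-based feature oversampling.

    For each feature of the minority class, build a sampling interval
    from simple statistics (assumed here: mean +/- one standard
    deviation, clipped to the observed min/max), then draw new feature
    values uniformly and independently within those intervals.
    """
    rng = np.random.default_rng(rng)
    mean, std = X_min.mean(axis=0), X_min.std(axis=0)
    lo = np.maximum(mean - std, X_min.min(axis=0))  # interval lower bounds
    hi = np.minimum(mean + std, X_min.max(axis=0))  # interval upper bounds
    # Generate n_new synthetic minority samples feature by feature.
    return rng.uniform(lo, hi, size=(n_new, X_min.shape[1]))

# Usage: grow a toy minority class of 5 samples to 20 before training.
X_min = np.array([[1.0, 10.0], [1.2, 9.5], [0.9, 10.5], [1.1, 9.8], [1.0, 10.2]])
X_new = interval_oversample_sketch(X_min, n_new=15, rng=0)
X_balanced = np.vstack([X_min, X_new])
```

Because each synthetic value is drawn from an interval anchored to the feature's own distribution, the sketch avoids nearest-neighbor queries entirely, which is the property the abstract credits for sidestepping boundary effects.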