Outlier
Artificial intelligence
Maximization
Generalization
Pattern recognition (psychology)
Computer science
Feature selection
Anomaly detection
Metric (unit)
Feature (linguistics)
Model selection
Machine learning
Mathematics
Engineering
Philosophy
Mathematical analysis
Mathematical optimization
Linguistics
Operations management
Authors
Ke Ren,Haichuan Yang,Yu Zhao,Wu Chen,Mingshan Xue,Hongyu Miao,Shuai Huang,Ji Liu
Identifier
DOI:10.1109/tnnls.2018.2870666
Abstract
Positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we observe only a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples. Building robust classifiers for the PU problem is very challenging, especially for complex data where negative samples vastly outnumber positive ones and mislabeled samples or corrupted features exist. To address these three issues, we propose a robust learning framework that unifies area-under-the-curve (AUC) maximization (a robust metric for biased labels), outlier detection (for excluding wrong labels), and feature selection (for excluding corrupted features). Generalization error bounds are provided for the proposed model; they give valuable insight into the theoretical performance of the method and lead to useful practical guidance, e.g., when training a model, we find that the included unlabeled samples are sufficient as long as their sample size is comparable to the number of positive samples in the training process. Empirical comparisons and two real-world applications on surgical site infection (SSI) and EEG seizure detection are also conducted to show the effectiveness of the proposed model.
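The abstract's core idea of AUC maximization for PU data can be illustrated with a minimal sketch: treat unlabeled samples as provisional negatives and learn a linear scorer that ranks positives above unlabeled points via a pairwise logistic surrogate, with an L1 soft-threshold standing in for the feature-selection component. This is an assumed, simplified construction for intuition only, not the authors' actual model (which also includes outlier detection and comes with error bounds); all function names and hyperparameters here are illustrative.

```python
import numpy as np

def pu_auc_step(w, X_pos, X_unl, lr=0.05, l1=0.001):
    """One gradient step on a pairwise logistic AUC surrogate for PU data.

    Minimizes sum_{i,j} log(1 + exp(-(s_i^+ - s_j^u))), i.e. pushes every
    positive score above every unlabeled score, then applies an L1
    soft-threshold as a crude stand-in for feature selection.
    """
    diff = (X_pos @ w)[:, None] - (X_unl @ w)[None, :]  # pairwise score gaps
    sig = 1.0 / (1.0 + np.exp(diff))                    # loss slope per pair
    grad = -(sig.sum(1) @ X_pos - sig.sum(0) @ X_unl) / sig.size
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # L1 shrink

def empirical_auc(w, X_pos, X_unl):
    """Fraction of positive/unlabeled pairs ranked correctly."""
    diff = (X_pos @ w)[:, None] - (X_unl @ w)[None, :]
    return (diff > 0).mean()

# Toy PU data: only feature 0 distinguishes positives from the rest.
rng = np.random.default_rng(0)
d = 5
X_pos = rng.normal(0.0, 1.0, (30, d))
X_pos[:, 0] += 2.0                       # informative feature
X_unl = rng.normal(0.0, 1.0, (200, d))   # unlabeled: mostly negatives

w = np.zeros(d)
for _ in range(200):
    w = pu_auc_step(w, X_pos, X_unl)
```

After training, the scorer should rank most positives above the unlabeled pool, and the L1 shrinkage keeps the weight mass concentrated on the informative feature.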