Cross-entropy
Artificial intelligence
Machine learning
Entropy (arrow of time)
Computer science
Binary number
Binary classification
Precision and recall
Recall
Pattern recognition (psychology)
Mathematics
Support vector machine
Arithmetic
Physics
Philosophy
Quantum mechanics
Linguistics
Authors
Mohammad Reza Rezaei-Dastjerdehei,Amirmohammad Mijani,Emad Fatemizadeh
Identifier
DOI:10.1109/icbme51989.2020.9319440
Abstract
Training a model or network on an imbalanced dataset has always been a challenging problem in machine learning and has been widely discussed by researchers. In fact, available machine learning algorithms are designed for moderately imbalanced datasets and largely do not address the imbalance problem. In machine learning, the imbalance problem appears when the number of samples in one class is significantly smaller than in another. To solve the imbalance problem, multiple algorithms have been proposed in machine learning and especially in deep learning. In this study, we use weighted binary cross-entropy as the loss function during learning instead of ordinary cross-entropy (binary cross-entropy). This loss allocates a larger penalty to minority-class samples during the learning process, so minority-class samples are detected more accurately. As a result, we improve recall while preserving accuracy. Results show that, compared with binary cross-entropy, weighted binary cross-entropy increases recall by about 10% while precision decreases by no more than 3%.
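
The idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the inverse-class-frequency weighting and the function name weighted_binary_cross_entropy are assumptions introduced here for clarity.

import numpy as np

def weighted_binary_cross_entropy(y_true, y_prob, w_pos, w_neg, eps=1e-12):
    # Weighted binary cross-entropy: w_pos scales the penalty on positive
    # (minority) samples, w_neg the penalty on negative (majority) samples.
    y_prob = np.clip(y_prob, eps, 1.0 - eps)  # avoid log(0)
    loss = -(w_pos * y_true * np.log(y_prob)
             + w_neg * (1.0 - y_true) * np.log(1.0 - y_prob))
    return loss.mean()

# Hypothetical weighting scheme (assumption): inverse class frequency,
# so the rarer class receives the larger weight.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=float)  # 20% positives
y_prob = np.array([0.1, 0.2, 0.05, 0.3, 0.15, 0.1, 0.25, 0.2, 0.6, 0.4])

n = len(y_true)
w_pos = n / (2.0 * y_true.sum())          # larger weight for the minority class
w_neg = n / (2.0 * (n - y_true.sum()))    # smaller weight for the majority class

print(weighted_binary_cross_entropy(y_true, y_prob, w_pos, w_neg))

Setting w_pos = w_neg = 1 recovers ordinary binary cross-entropy. In a deep-learning framework the same effect is typically obtained by passing a positive-class weight to the built-in loss (for example, the pos_weight argument of PyTorch's BCEWithLogitsLoss).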