Cross-entropy
Dice
Segmentation
Computer science
Ranking (information retrieval)
Benchmark (surveying)
Artificial intelligence
Entropy (arrow of time)
Binary classification
Machine learning
Data mining
Pattern recognition (psychology)
Mathematics
Statistics
Support vector machine
Physics
Geodesy
Geography
Quantum mechanics
Authors
Quang Du Nguyen, Huu-Tai Thai
Identifier
DOI: 10.1016/j.engstruct.2023.116988
Abstract
Loss functions, which govern a deep learning-based optimization process, have been widely investigated as a way to handle class-imbalanced data in crack segmentation. However, their performance varies across models and datasets, making it challenging to choose the most appropriate one. To address this issue, the paper conducts a large-scale performance comparison of twelve commonly used loss functions on four benchmark datasets. Various aspects are considered using a statistical test-based ranking scheme, which integrates accuracy, sensitivity to threshold change, and varying degrees of imbalance severity for a comprehensive comparison. The results show that most loss functions achieve relatively similar accuracy on mildly imbalanced datasets, while weighted binary cross-entropy loss, Focal loss, Dice-based loss, and compound loss functions significantly outperform the others as imbalance severity increases. In general, the Focal Tversky loss function exhibits excellent performance in handling the imbalanced data issue.
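The abstract singles out the Focal Tversky loss as the strongest performer under severe class imbalance. As a rough illustration of why it suits imbalanced segmentation, the sketch below implements the commonly cited formulation (Tversky index with false-negative weight alpha, false-positive weight beta, raised to a focusing exponent gamma). This is a minimal NumPy sketch under those standard assumptions, not the paper's implementation; the default values alpha=0.7, beta=0.3, gamma=0.75 follow common practice and may differ from the study's settings.

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary segmentation mask.

    y_true: binary ground-truth mask; y_pred: predicted probabilities in [0, 1].
    alpha weights false negatives (missed crack pixels), beta weights false
    positives; gamma < 1 enlarges the gradient on hard, poorly segmented
    examples. Note these parameter defaults are illustrative assumptions.
    """
    y_true = np.asarray(y_true, dtype=np.float64).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float64).ravel()
    tp = np.sum(y_true * y_pred)                # soft true positives
    fn = np.sum(y_true * (1.0 - y_pred))        # soft false negatives
    fp = np.sum((1.0 - y_true) * y_pred)        # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

# A perfect prediction yields a loss near 0; a mostly wrong one is much larger.
mask = np.array([[1, 1], [0, 0]])
good = np.array([[0.9, 0.8], [0.1, 0.2]])
bad = np.array([[0.1, 0.2], [0.9, 0.8]])
print(focal_tversky_loss(mask, good) < focal_tversky_loss(mask, bad))  # True
```

Because alpha > beta penalizes missed foreground pixels more than spurious ones, the loss counteracts the tendency of a model trained on imbalanced data to under-predict the rare crack class, which is consistent with the behavior the abstract reports.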