Computer Science
Healthcare
Artificial Intelligence
Deep Learning
Data Science
Economics
Economic Growth
Authors
Asmaa AbdulQawy, Elsayed A. Sallam, Amr Elkholy
Identifier
DOI: 10.1109/jbhi.2024.3484951
Abstract
The rapid integration of deep learning-powered artificial intelligence systems in diverse applications such as healthcare, credit assessment, employment, and criminal justice has raised concerns about their fairness, particularly in how they handle various demographic groups. This study delves into the existing biases and their ethical implications in deep learning models. It introduces an UnBias approach for assessing bias in different deep neural network architectures and detects instances where bias seeps into the learning process, shifting the model's focus away from the main features. This contributes to the advancement of equitable and trustworthy AI applications in diverse social settings, especially in healthcare. A case study on COVID-19 detection is carried out, involving chest X-ray scan datasets from various publicly accessible repositories and five well-represented and under-represented gender-based models across four deep-learning architectures: ResNet50V2, DenseNet121, InceptionV3, and Xception.
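To make the case-study setup concrete, the sketch below shows one way the per-gender evaluation across the four named architectures could look in Keras. It does not reproduce the paper's UnBias procedure; the loader `load_gender_split_xrays`, the input size, and the binary labelling are assumptions introduced here for illustration only.

```python
# Minimal sketch (assumptions, not the authors' code): compare per-gender
# test performance of the four backbones named in the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, applications

BACKBONES = {
    "ResNet50V2": applications.ResNet50V2,
    "DenseNet121": applications.DenseNet121,
    "InceptionV3": applications.InceptionV3,
    "Xception": applications.Xception,
}

IMG_SHAPE = (224, 224, 3)  # assumed input size; all four backbones accept it

def build_classifier(backbone_fn):
    """Binary COVID-19 classifier: convolutional backbone + sigmoid head."""
    # weights=None keeps the sketch offline; weights="imagenet" could be used,
    # in which case each backbone's own preprocess_input should be applied.
    base = backbone_fn(include_top=False, weights=None,
                       input_shape=IMG_SHAPE, pooling="avg")
    out = layers.Dense(1, activation="sigmoid")(base.output)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

def load_gender_split_xrays():
    """Hypothetical loader: returns {'male': (x, y), 'female': (x, y)} test splits.
    Replace with real chest X-ray data from the public repositories the paper uses."""
    rng = np.random.default_rng(0)
    def fake(n):
        x = rng.random((n, *IMG_SHAPE), dtype=np.float32)
        y = rng.integers(0, 2, size=n).astype(np.float32)
        return x, y
    return {"male": fake(32), "female": fake(32)}

if __name__ == "__main__":
    splits = load_gender_split_xrays()
    for name, backbone_fn in BACKBONES.items():
        model = build_classifier(backbone_fn)
        # Training on the well-represented / under-represented gender-based
        # subsets would happen here; omitted for brevity.
        for group, (x, y) in splits.items():
            loss, auc = model.evaluate(x, y, verbose=0)
            print(f"{name:12s} {group:6s} AUC={auc:.3f}")
```

A gap in AUC (or another metric) between the gender-based test splits for the same architecture is the kind of disparity a bias assessment of this sort would flag; how UnBias quantifies and localizes that shift is detailed in the paper itself.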