Explainability
Medicine
Cancer
Oncology
Internal Medicine
Artificial Intelligence
Computer Science
Source
Journal: International Journal for Research in Applied Science and Engineering Technology (IJRASET)
Date: 2025-01-21
Volume/Issue: 13(1): 1394-1402
Citations: 1
Identifiers
DOI:10.22214/ijraset.2025.66580
Abstract
Machine learning (ML) is revolutionizing cancer diagnosis by providing advanced algorithms capable of detecting and classifying tumors with high accuracy. However, these models are often perceived as "black boxes" due to their lack of transparency and interpretability, which limits their adoption in clinical settings where understanding the reasoning behind a diagnosis is vital for decision-making. In critical fields like oncology, the opacity of ML models undermines trust among medical professionals. This research applies Explainable Artificial Intelligence (XAI) techniques to a hybrid ML model, combining decision trees and XGBoost, for diagnosing cancer using a licensed dataset that differentiates between benign and malignant tumors. Specifically, SHapley Additive exPlanations (SHAP) is used to interpret the model’s predictions by explaining the influence of key features, such as tumor size, texture, and shape, achieving an accuracy of 93.86%. This study demonstrates that SHAP not only improves the interpretability of ML models in cancer diagnostics but also aligns its explanations with clinical knowledge, facilitating the integration of such models into real-world clinical practice without compromising accuracy. Future work will explore larger datasets, more complex models, and real-time SHAP explanations to further enhance the clinical utility of XAI in cancer diagnosis.
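The abstract describes training a gradient-boosted tree model on benign-versus-malignant tumor data and interpreting its predictions with SHAP. The sketch below is a minimal, illustrative reconstruction of that kind of pipeline, not the authors' code: the scikit-learn breast cancer dataset stands in for the licensed dataset used in the paper, and the XGBoost hyperparameters are assumptions chosen only for demonstration.

```python
# Minimal sketch: XGBoost classifier on tumor features + SHAP explanations.
# Dataset and hyperparameters are illustrative stand-ins, not the paper's setup.
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Tumor measurements (radius, texture, etc.) with benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Gradient-boosted trees; settings here are placeholders, not tuned values.
model = xgb.XGBClassifier(
    n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# SHAP attributes each prediction to individual features, showing how size-
# and texture-related measurements push a case toward benign or malignant.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```

The summary plot ranks features by their average impact on the model output, which is how SHAP explanations can be compared against clinical expectations about which tumor characteristics drive the diagnosis.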