Interpretability
Computer science
Machine learning
Artificial intelligence
Relevance (law)
Process (computing)
Test data
Federated learning
Deep learning
Test (biology)
Independent and identically distributed random variables
Data mining
Paleontology
Statistics
Mathematics
Random variable
Political science
Law
Biology
Programming language
Operating system
Authors
M D Zahin Muntaqim, Tangin Amir Smrity
Identifier
DOI:10.1007/s10278-025-01484-9
Abstract
Brain tumor detection from medical images, especially magnetic resonance imaging (MRI) scans, is a critical task in early diagnosis and treatment planning. Traditional machine learning approaches often rely on centralized data, raising concerns about data privacy, security, and the difficulty of obtaining large annotated datasets. Federated learning (FL) has emerged as a promising solution for training models across decentralized devices while maintaining data privacy. However, challenges remain in dealing with non-IID data (data that is not independent and identically distributed), which is common in real-world scenarios. In this research, we used a client-server federated learning framework for brain tumor detection from MRI images, leveraging VGG19 as the backbone model. To improve clinical relevance and model interpretability, we incorporated explainability techniques, particularly Grad-CAM. We trained our model across four clients with a non-IID data distribution to simulate real-world conditions. For performance evaluation, we used a centralized test dataset consisting of 20% of the original data, evaluated collectively after the federated learning rounds were completed. Using a separate test dataset ensures that all models are evaluated on the same data, making comparisons fair. Since the test dataset is not part of the FL training process, it does not violate the privacy-preserving nature of FL. The experimental results demonstrate that the VGG19 model achieves high test accuracies of 97.18% (FedAvg), 98.24% (FedProx), and 98.45% (SCAFFOLD), higher than other state-of-the-art models, showcasing the effectiveness of federated learning in handling distributed and non-IID data. Our findings highlight the potential of federated learning to address privacy concerns in medical image analysis while maintaining high performance even in non-IID settings. This approach provides a promising direction for future research in privacy-preserving AI for healthcare applications.
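To make the server-side aggregation described above concrete, the sketch below shows a minimal FedAvg-style round with a VGG19 backbone in PyTorch. It is an illustrative reconstruction under assumptions rather than the paper's implementation: the two-class head, SGD optimizer, learning rate, epoch count, and the client_loaders placeholder are not from the paper, and the FedProx and SCAFFOLD variants (which add a proximal term to the local objective and client/server control variates, respectively) are omitted.

# Minimal FedAvg sketch for four clients with a VGG19 backbone (illustrative only;
# hyperparameters, head size, and data loaders are assumptions, not the authors' code).
import copy
import torch
import torchvision

def local_update(global_model, loader, epochs=1, lr=1e-3, device="cpu"):
    # Train a copy of the current global model on one client's private MRI data.
    model = copy.deepcopy(global_model).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
    return model.state_dict(), len(loader.dataset)

def fed_avg(client_states, client_sizes):
    # Server step: average client parameters weighted by local dataset size.
    total = float(sum(client_sizes))
    averaged = copy.deepcopy(client_states[0])
    for key in averaged:
        averaged[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return averaged

# Global model: VGG19 backbone with a two-class tumor / no-tumor head (assumed).
global_model = torchvision.models.vgg19(weights=None)
global_model.classifier[6] = torch.nn.Linear(4096, 2)

# client_loaders would hold four non-IID DataLoaders, one per simulated client.
# for round_idx in range(num_rounds):
#     results = [local_update(global_model, dl) for dl in client_loaders]
#     states, sizes = zip(*results)
#     global_model.load_state_dict(fed_avg(list(states), list(sizes)))

In this setup each of the four clients would hold a disjoint, non-IID slice of the MRI dataset, while the held-out 20% test split stays outside the federation and is used only to evaluate the aggregated global model after the FL rounds finish, consistent with the evaluation protocol in the abstract.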