Computer science
Artificial intelligence
Deep learning
Robustness (evolution)
Adversarial system
Medical imaging
Deep neural network
Machine learning
Segmentation
Vulnerability (computing)
Computer security
Biochemistry
Gene
Chemistry
Authors
Sara Kaviani,Ki Jin Han,Insoo Sohn
Identifier
DOI:10.1016/j.eswa.2022.116815
Abstract
In recent years, medical imaging has significantly improved and facilitated diagnosis in diverse tasks, including classification of lung diseases, detection of nodules, brain tumor segmentation, and recognition of body organs. At the same time, the superior performance of machine learning (ML) techniques, specifically deep neural networks (DNNs), in various domains has led to the application of deep learning approaches to medical image classification and segmentation. Due to the security and safety-critical issues involved, healthcare systems are considered quite challenging, and their accuracy is of great importance. Previous studies have raised lingering doubts about medical DNNs and their vulnerability to adversarial attacks. Although various defense methods have been proposed, concerns remain about the application of deep learning in medicine, owing to weaknesses of medical imaging such as the lack of a sufficient amount of high-quality images and labeled data compared to the many high-quality natural image datasets. This paper reviews recently proposed adversarial attack methods against medical imaging DNNs and defense techniques against these attacks. It also discusses different aspects of these methods and provides future directions for improving the robustness of neural networks.
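For background on the kind of attack methods such a review surveys, the following is a minimal sketch of the fast gradient sign method (FGSM), one of the canonical white-box adversarial attacks against image classifiers. It is not taken from the paper; the toy model, input tensor, and epsilon value are hypothetical placeholders used only to illustrate how a small, bounded perturbation of a medical image can be crafted to increase a classifier's loss.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft an adversarial example with FGSM.

        Perturbs input x in the direction that increases the loss,
        bounded by epsilon in the L-infinity norm.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        loss.backward()
        # Step along the sign of the gradient, then clamp to the valid image range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Hypothetical usage: a toy two-class classifier on a single 3x224x224 image.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
    x = torch.rand(1, 3, 224, 224)   # placeholder image
    y = torch.tensor([1])            # placeholder label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())   # perturbation stays within epsilon

Defenses discussed in this literature, such as adversarial training, typically reuse an attack like the one above to generate perturbed examples during training so that the model learns to classify them correctly.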