Adversarial system
Digital watermarking
Computer science
Artificial intelligence
Deep learning
Image (mathematics)
Computer vision
Computer security
Digital image
Pattern recognition (psychology)
Image processing
Authors
Kyriakos D. Apostolidis, George A. Papakostas
Source
Journal: Journal of Imaging
[Multidisciplinary Digital Publishing Institute]
Date: 2022-05-30
Volume/Issue: 8 (6): 155-155
Citations: 17
Identifier
DOI:10.3390/jimaging8060155
Abstract
In recent years, Deep Neural Networks (DNNs) have become popular in many disciplines such as Computer Vision (CV), and the evolution of hardware has helped researchers develop many powerful Deep Learning (DL) models for a variety of problems. One of the most important challenges in the CV area is Medical Image Analysis. However, adversarial attacks have proven to be an important threat to vision systems, significantly reducing the performance of models. This paper brings to light a different side of digital watermarking, as a potential black-box adversarial attack. In this context, apart from proposing a new category of adversarial attacks named watermarking attacks, we highlight a significant problem: the widespread use of watermarks for security purposes appears to pose serious risks to vision systems. For this purpose, a moment-based local image watermarking method is applied to three modalities: Magnetic Resonance Imaging (MRI), Computed Tomography (CT scans), and X-ray images. The introduced methodology was tested on three state-of-the-art CV models: DenseNet201, DenseNet169, and MobileNetV2. The results revealed that the proposed attack achieved over 50% degradation of the models' accuracy. Additionally, MobileNetV2 was the most vulnerable model, and the modality with the largest reduction was CT scans.
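The abstract describes treating watermark embedding as a black-box perturbation and measuring the resulting accuracy drop of pretrained classifiers. The sketch below illustrates that evaluation loop only; it is not the paper's method. The paper uses a moment-based local watermarking scheme, whereas here a simple additive patch watermark stands in as a placeholder perturbation, and the data loader and watermark pattern are hypothetical.

```python
# Minimal sketch of a black-box watermarking-attack evaluation, assuming PyTorch/torchvision.
# The embed function below is a simplified stand-in, NOT the paper's moment-based method.
import torch
import torchvision.models as models


def embed_patch_watermark(image, mark, alpha=0.15, top=0, left=0):
    """Blend a small watermark `mark` into a local region of `image` (C,H,W, values in [0,1])."""
    wm = image.clone()
    _, mh, mw = mark.shape
    region = wm[:, top:top + mh, left:left + mw]
    wm[:, top:top + mh, left:left + mw] = (1 - alpha) * region + alpha * mark
    return wm.clamp(0, 1)


@torch.no_grad()
def accuracy(model, loader, perturb=None):
    """Top-1 accuracy, optionally applying a black-box perturbation to each image."""
    model.eval()
    correct = total = 0
    for images, labels in loader:  # loader yields (B,C,H,W) float tensors and integer labels
        if perturb is not None:
            images = torch.stack([perturb(img) for img in images])
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


# Hypothetical usage (test_loader would hold, e.g., a CT-scan classification test set):
# model = models.mobilenet_v2(weights="IMAGENET1K_V1")   # one of the evaluated architectures
# mark = torch.rand(3, 32, 32)                           # stand-in watermark pattern
# clean_acc = accuracy(model, test_loader)
# wm_acc = accuracy(model, test_loader, perturb=lambda x: embed_patch_watermark(x, mark))
# print(f"accuracy drop: {clean_acc - wm_acc:.3f}")
```

Comparing `clean_acc` against `wm_acc` gives the performance degradation attributed to the watermark, which is the quantity the abstract reports exceeding 50%.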