Visualization
Computer vision
Computer science
Artificial intelligence
Image (mathematics)
Computer graphics (images)
Authors
Zeyu Zhang, Xianfeng Zhao, Yun Cao
Identifier
DOI:10.1016/j.jvcir.2024.104125
Abstract
Contemporary tampering detection models in digital image forensics often rely on established techniques such as SRM and ELA to reveal evidence of manipulation. Despite their widespread use, these methods often yield unsatisfactory visualizations. To enhance the visibility of tampering artifacts, we propose a novel image representation called Tamper Reconstruction Error (TRE), which measures the error between an input image and its counterpart reconstructed by a pre-trained mixed generator. We observed that extracting reconstruction errors with a model proficient in computer vision tasks does not clearly reveal tampering traces in manually manipulated images. To emphasize the more pronounced discrepancies in the reconstruction of tampered images, the TRE representation is fed into two dedicated extractors that capture manipulation features in both the frequency and spatial domains. During learning, these extractors adaptively project the essential forgery traces back into the spatial domain. Furthermore, to validate the importance of the extracted errors for tampering localization, we introduce a localization annotator that integrates reconstruction errors at different stages of encoding and decoding the latent features. Experimental results demonstrate that integrating the extracted features significantly improves tampering localization performance, outperforming other state-of-the-art localization frameworks.
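The core idea of TRE can be sketched in a few lines: compute the per-pixel absolute error between an image and its reconstruction, then view that error map in the frequency domain. The following is a minimal illustration only; the `blur_generator` stand-in and the fixed FFT view are assumptions for demonstration (the paper uses a pre-trained mixed generator and learned frequency/spatial extractors).

```python
import numpy as np

def tamper_reconstruction_error(image, reconstruct):
    """Tamper Reconstruction Error (TRE): per-pixel absolute error between
    an image and its reconstruction. `reconstruct` stands in for the
    paper's pre-trained mixed generator."""
    recon = reconstruct(image)
    return np.abs(image.astype(np.float64) - recon.astype(np.float64))

def frequency_view(tre):
    """Illustrative frequency-domain view of the TRE map via a 2-D FFT
    magnitude spectrum (the paper's extractor is learned, not fixed)."""
    return np.abs(np.fft.fftshift(np.fft.fft2(tre)))

def blur_generator(x):
    """Hypothetical 'reconstruction' model: a 3x3 box blur. It reproduces
    smooth regions well, so sharp pasted content yields large TRE."""
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = x[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].mean()
    return out

# Toy demo: a flat background with a bright "spliced" square patch.
img = np.full((32, 32), 0.5)
img[8:16, 8:16] = 1.0  # simulated tampered region

tre = tamper_reconstruction_error(img, blur_generator)
spec = frequency_view(tre)
```

In this toy setting the TRE map is near zero over the flat background and peaks along the boundary of the pasted patch, which is exactly the kind of discrepancy the dedicated extractors are meant to amplify.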