Computer science
Artificial intelligence
Discriminative model
Margin (machine learning)
Computer vision
Image (mathematics)
Image editing
Transparency (behavior)
Encoder
Context (archaeology)
Pairwise comparison
Deep learning
Pattern recognition (psychology)
Machine learning
Computer security
Operating system
Paleontology
Biology
Authors
Hao Jing,Zhixin Zhang,Shicai Yang,Di Xie,Shiliang Pu
Identifier
DOI:10.1109/iccv48922.2021.01478
Abstract
Nowadays, advanced image editing tools and technical skills produce tampered images that look increasingly realistic, easily evading image forensic systems and making authenticity verification more difficult. To tackle this challenging problem, we introduce TransForensics, a novel image forgery localization method inspired by Transformers. The two major components in our framework are dense self-attention encoders and dense correction modules. The former models global context and all pairwise interactions between local patches at different scales, while the latter improves the transparency of the hidden layers and corrects the outputs from different branches. Compared to previous traditional and deep learning methods, TransForensics can not only capture discriminative representations and obtain high-quality mask predictions, but is also not limited by tampering types or patch sequence orders. Experiments on the main benchmarks show that TransForensics outperforms state-of-the-art methods by a large margin.
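To illustrate the idea of applying dense self-attention to local patch features at a given scale and predicting a forgery mask per branch, here is a minimal PyTorch sketch. The layer sizes, number of heads, the `PatchSelfAttentionEncoder` and `MaskHead` names, and the simple averaging of branch outputs are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of self-attention over patch features for forgery-mask prediction.
# Hyperparameters and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class PatchSelfAttentionEncoder(nn.Module):
    """Self-attention over the flattened spatial positions of a feature map,
    so every local patch attends to every other patch (global context)."""

    def __init__(self, channels: int, num_heads: int = 8, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> tokens: (B, H*W, C)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)
        tokens = self.encoder(tokens)  # all pairwise patch interactions
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class MaskHead(nn.Module):
    """Illustrative per-branch head turning attended features into a forgery mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.predict = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.predict(feat))  # (B, 1, H, W) tampering probability


if __name__ == "__main__":
    # Fake multi-scale backbone features at two resolutions.
    feats = [torch.randn(1, 256, 32, 32), torch.randn(1, 256, 16, 16)]
    enc, head = PatchSelfAttentionEncoder(256), MaskHead(256)
    masks = [head(enc(f)) for f in feats]
    # Upsample coarse predictions to a common size before combining branches.
    masks = [
        nn.functional.interpolate(m, size=(32, 32), mode="bilinear", align_corners=False)
        for m in masks
    ]
    fused = torch.stack(masks).mean(0)
    print(fused.shape)  # torch.Size([1, 1, 32, 32])
```

Because attention is computed over all patch positions rather than a fixed scan order, predictions of this kind do not depend on a particular patch sequence order, which matches the property highlighted in the abstract.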