Inpainting
Computer science
Artificial intelligence
Construct (python library)
Computer vision
Texture synthesis
Transformer
Pixel
Image (mathematics)
Pattern recognition (psychology)
Deep learning
Image processing
Image texture
Physics
Quantum mechanics
Voltage
Programming language
Authors
R. G. Li,Jiangyan Dai,Qibing Qin,Chengduan Wang,Huihui Zhang,Yugen Yi
Abstract
Deep learning exhibits a powerful capability in image inpainting, particularly in generating pixel-level details consistent with human visual perception. However, complex backgrounds or large missing regions still lead to artifacts. Prior studies have shown that prior information is crucial for guiding image inpainting. In this paper, we introduce a dual-attention mechanism, comprising lightweight spatial attention and linearized attention, to construct an end-to-end texture- and structure-guided image inpainting method. In the first stage, we build a detail inpainting network with lightweight spatial attention. In this model, the extracted texture and structural features are fused across multiple layers, and the fused detail image serves as a prior to guide the detail repair of corrupted images. In the second stage, we construct a content completion network from the repaired detail and a linearized Transformer module. This module not only overcomes the limited receptive field of convolutional kernels, improving long-range feature modelling, but also significantly reduces the computational complexity of the original Transformer. To demonstrate the effectiveness of the proposed method, we perform extensive experiments against advanced models on three datasets: CelebA-HQ, Places2, and Paris Street Views. Comparative results show that our method achieves excellent inpainting results that conform to the human visual system. The code is available at https://github.com/QinLab-WFU/TSGDAM
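The complexity reduction the abstract attributes to the linearized Transformer comes from replacing softmax attention, which is quadratic in sequence length, with a kernel feature map so that keys and values can be summarized once. The sketch below is a minimal illustration of that general idea in NumPy, assuming a softplus feature map; the paper's actual module, kernel choice, and normalization may differ.

```python
import numpy as np

def feature_map(x):
    # Positive kernel feature map (softplus here); an assumed stand-in for the
    # paper's actual kernel, chosen only to keep attention weights non-negative.
    return np.log1p(np.exp(x))

def linear_attention(Q, K, V):
    # Linearized attention in O(N * d^2) instead of O(N^2 * d):
    # phi(K)^T V is computed once and reused for every query.
    Qf, Kf = feature_map(Q), feature_map(K)          # (N, d) each
    KV = Kf.T @ V                                    # (d, d): summary of all key/value pairs
    Z = Qf @ Kf.sum(axis=0, keepdims=True).T         # (N, 1): per-query normalizer
    return (Qf @ KV) / (Z + 1e-6)

rng = np.random.default_rng(0)
N, d = 64, 16                                        # toy sequence length and feature dim
Q, K, V = rng.standard_normal((3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (64, 16)
```

Because the kernelized weights factor as `phi(Q) @ phi(K).T`, this produces (up to the small stabilizing epsilon) the same result as forming the full N-by-N weight matrix and normalizing each row, while never materializing that matrix.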