Context Adaptive Network for Image Inpainting

Keywords: image inpainting, computer science, artificial intelligence, context, convolution kernel, convolution, pattern recognition, feature, convolutional neural network, machine learning, image, artificial neural network, computer vision
Authors
Ye Deng, S. Hui, Sanping Zhou, Wenli Huang, Jinjun Wang
Source
Journal: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers]
Volume: 32, pages 6332-6345; cited by: 23
Identifier
DOI: 10.1109/tip.2023.3298560
Abstract

In a typical image inpainting task, the location and shape of the damaged or masked area are often random and irregular. The vanilla convolutions widely used in learning-based inpainting models treat all spatial features as valid and share parameters across regions, making it difficult for them to cope with such irregular damage, and the models tend to produce inpainting results with color discrepancy and blurriness. In this paper, we propose a novel Context Adaptive Network (CANet) to address this issue. The main idea of the proposed CANet is to generate different weights depending on the input, which helps to complete images with diverse forms of damage in a flexible way. Specifically, the proposed CANet contains two novel context adaptive modules, namely the context adaptive block (CAB) and the cross-scale contextual attention (CSCA), which use attention mechanisms to cope with diverse content breakdowns. During forward propagation, the proposed CAB uses an adaptive term to weigh its contribution against that of the convolution kernel, dynamically balancing features according to the degree of damage (a confidence level or soft mask); the overall computation is formulated as a standard convolution with an additional attention term that describes local structure. Besides, the proposed CSCA not only takes advantage of the contextual attention mechanism but also performs cross-scale information transfer to generate reasonable features for damaged areas, thus alleviating the limited long-range modeling capability of convolutional neural networks. Qualitative and quantitative experiments show that our method outperforms state-of-the-art approaches, producing clearer, more coherent, and visually plausible inpainting results. The code can be found at github.com/dengyecode/CANet_image_inpainting.
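To make the CAB idea from the abstract concrete, below is a minimal PyTorch sketch of a mask-modulated, context-adaptive convolution. It assumes the adaptive term is a sigmoid-gated attention map predicted from the features and a soft mask, and that the output blends the plain kernel response with its attention-modulated version according to pixel confidence. The class name ContextAdaptiveConv, the adapt branch, and the blending rule are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class ContextAdaptiveConv(nn.Module):
    """Hypothetical sketch of a context-adaptive convolution: the output of a
    standard convolution is re-weighted by a per-pixel adaptive term derived
    from a soft mask (1 = valid pixel, 0 = hole), so damaged regions rely more
    on the learned attention term and intact regions on the raw kernel response."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # standard convolution branch
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # lightweight branch that predicts a per-pixel adaptive/attention term
        # from the features concatenated with the soft mask (an assumption)
        self.adapt = nn.Sequential(
            nn.Conv2d(in_ch + 1, out_ch, kernel_size, padding=padding),
            nn.Sigmoid(),
        )

    def forward(self, x, mask):
        # x:    (B, C, H, W) feature map
        # mask: (B, 1, H, W) soft confidence map in [0, 1]
        feat = self.conv(x)
        attn = self.adapt(torch.cat([x, mask], dim=1))
        # high-confidence pixels keep the plain convolution response,
        # low-confidence (damaged) pixels lean on the attention-modulated one
        return mask * feat + (1.0 - mask) * (attn * feat)


if __name__ == "__main__":
    block = ContextAdaptiveConv(64, 64)
    x = torch.randn(1, 64, 256, 256)
    mask = torch.rand(1, 1, 256, 256)   # soft mask / confidence map
    out = block(x, mask)
    print(out.shape)                    # torch.Size([1, 64, 256, 256])
```

The design choice illustrated here is only the balancing principle described in the abstract (convolution plus a confidence-driven adaptive term); the paper's actual CAB and the cross-scale CSCA module are defined in the authors' code and paper.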