Computer science
Artificial intelligence
Block (permutation group theory)
Feature (linguistics)
Object (grammar)
Coding (set theory)
Image (mathematics)
Process (computing)
Computer graphics
Pattern recognition (psychology)
Computer vision
Similarity (geometry)
Drawing
Object detection
Transfer (computing)
Transfer of learning
Computer graphics (images)
Philosophy
Linguistics
Geometry
Mathematics
Set (abstract data type)
Parallel computing
Programming language
Operating system
Authors
Fenfen Zhou, Yingjie Tian, Zhiquan Qi
Identifier
DOI: 10.1109/tcsvt.2020.3024213
Abstract
Natural image matting is an important problem that is widely applied in computer vision and graphics. Recent deep-learning matting approaches have made impressive progress in both accuracy and efficiency. However, two fundamental problems remain largely unsolved: 1) accurately separating an object from an image whose foreground and background have similar colors or abundant detail; and 2) exactly extracting an object with fine structures from a complex background. In this paper, we propose an attention transfer network (ATNet) to overcome these challenges. Specifically, we first design a feature attention block to effectively distinguish the foreground object from color-similar regions by activating foreground-related features while suppressing others. Then, we introduce a scale transfer block to magnify the feature maps without adding extra information. By integrating the above blocks into an attention transfer module, we effectively reduce artificial content in the results and decrease the computational complexity. In addition, we use a perceptual loss to measure the difference between the feature representations of the predictions and the ground truths. It further captures the high-frequency details of the image and, consequently, optimizes the fine structures of the object. Extensive experiments on two public benchmark datasets (i.e., the Composition-1k matting dataset and the www.alphamatting.com dataset) show that the proposed ATNet obtains significant improvements over previous methods. The source code and compiled models have been made publicly available at https://github.com/ailsaim/ATNet.
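The abstract does not spell out the internals of the feature attention block; a common way to "activate foreground-related features while suppressing others" is channel-wise gating in the style of squeeze-and-excitation. The sketch below is a minimal illustration under that assumption; the class name, the `reduction` ratio, and the SE-style design are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class FeatureAttentionBlock(nn.Module):
    """Illustrative channel attention (SE-style): learn a per-channel
    gate that amplifies foreground-related feature maps and damps the rest.
    This is an assumption about the block's design, not the paper's code."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # reweight the feature maps
```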
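"Magnify the feature maps without adding extra information" is consistent with sub-pixel (pixel-shuffle) upsampling, which rearranges existing channel values into a larger spatial grid instead of interpolating new ones. The sketch below assumes that interpretation; the class name and the upscale factor are hypothetical.

```python
import torch
import torch.nn as nn

class ScaleTransferBlock(nn.Module):
    """Illustrative scale transfer via pixel shuffle: reshape
    (B, C*r^2, H, W) -> (B, C, H*r, W*r). No parameters are learned and
    no new values are invented, so no extra information is added."""
    def __init__(self, upscale: int = 2):
        super().__init__()
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(x)
```

Note that the input channel count must be divisible by the square of the upscale factor for the rearrangement to be well defined.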
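A perceptual loss compares predictions and ground truths in the feature space of a fixed pretrained network, which is far more sensitive to high-frequency structure than a pixel-wise loss. The sketch below uses a frozen VGG-16 up to relu3_3 as the loss network; the choice of backbone and layer is an assumption for illustration, not the paper's stated configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """Illustrative perceptual loss: MSE between frozen VGG-16 features
    of the prediction and the ground truth. Backbone/layer choice is an
    assumption. NOTE: ImageNet mean/std normalization omitted for brevity."""
    def __init__(self, layer_idx: int = 16):  # features[:16] ends at relu3_3
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)           # the loss network stays fixed
        self.criterion = nn.MSELoss()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Alpha mattes are single-channel; tile to 3 channels for VGG input.
        if pred.shape[1] == 1:
            pred, target = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return self.criterion(self.features(pred), self.features(target))
```

In training, such a term is typically added to a pixel-level alpha loss with a small weight, so the network is pushed to match fine structures without drifting from the ground-truth matte values.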