Computer science
Anime
Transformer
Artificial intelligence
Computer vision
Line (geometry)
Visualization
Pattern recognition (psychology)
Mathematics
Voltage
Physics
Geometry
Quantum mechanics
Authors
Jianxin Lin, Zhao Wei, Yijun Wang
Identifier
DOI:10.1109/tmm.2024.3358027
Abstract
Exemplar-based anime line art colorization of the same character has been a challenging problem in digital art production because of the sparse representation of line images and the significant appearance gap between line and color images. Finding semantic correspondence between the two kinds of images is therefore a fundamental problem. In this paper, we propose a correspondence-learning Transformer network for exemplar-based line art colorization, called ArtFormer, which uses a Transformer-based architecture to learn both spatial and visual relationships between line art and color images. ArtFormer mainly consists of two parts: correspondence learning and high-quality image generation. In particular, the correspondence learning module is composed of several Transformer blocks, each of which formulates the deep line image features and color image features as queries and keys and learns the dense correspondence between the two image domains. The network then synthesizes high-quality images with a newly proposed Spatial Attention Adaptive Normalization (SAAN) that uses warped deep exemplar features to modulate the shallow features and generate better adaptive normalization parameters. Both qualitative and quantitative experiments show that our method achieves the best performance on exemplar-based line art colorization compared with state-of-the-art methods and other baselines.
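As a rough illustration of the two components the abstract describes, the sketch below shows (a) a cross-attention block that treats line-art features as queries and exemplar color features as keys/values to produce a warped exemplar feature map, and (b) a SPADE-style spatially adaptive normalization layer in the spirit of SAAN, where the warped exemplar features predict per-pixel scale and shift for the shallow features. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; the module names (CorrespondenceAttention, SAAN) and all hyperparameters are illustrative.

```python
# Illustrative sketch only: cross-attention correspondence + SPADE-style
# adaptive normalization, loosely following the abstract's description.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrespondenceAttention(nn.Module):
    """Dense correspondence via scaled dot-product cross-attention:
    line-art features form queries, exemplar color features form keys/values."""
    def __init__(self, channels):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, 1)  # queries from line features
        self.to_k = nn.Conv2d(channels, channels, 1)  # keys from color exemplar
        self.to_v = nn.Conv2d(channels, channels, 1)  # values from color exemplar
        self.scale = channels ** -0.5

    def forward(self, line_feat, color_feat):
        b, c, h, w = line_feat.shape
        q = self.to_q(line_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.to_k(color_feat).flatten(2)                  # (B, C, HW)
        v = self.to_v(color_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)      # (B, HW, HW) correspondence
        warped = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return warped  # exemplar features aligned to the line-art layout


class SAAN(nn.Module):
    """Spatially adaptive normalization: warped exemplar features predict
    per-pixel gamma/beta that modulate the normalized shallow features."""
    def __init__(self, shallow_channels, exemplar_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(shallow_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(exemplar_channels, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, shallow_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, shallow_channels, 3, padding=1)

    def forward(self, shallow_feat, warped_exemplar):
        # Resize the warped exemplar map to the shallow feature resolution.
        if warped_exemplar.shape[-2:] != shallow_feat.shape[-2:]:
            warped_exemplar = F.interpolate(
                warped_exemplar, size=shallow_feat.shape[-2:],
                mode="bilinear", align_corners=False)
        h = self.shared(warped_exemplar)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        return self.norm(shallow_feat) * (1 + gamma) + beta
```

In this reading, the attention matrix plays the role of the learned dense correspondence between the two image domains, and the warped exemplar features carry the exemplar's colors into the line-art layout before the normalization layer injects them into the generator; the actual ArtFormer blocks, feature depths, and normalization details are specified in the paper itself.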