Image translation
Adversarial
Computer science
Image (mathematics)
Artificial intelligence
Computer vision
Identifier
DOI:10.1145/3638584.3638662
Abstract
In this paper, we introduce a novel model that combines the strengths of Pix2pix with a Self-Attention mechanism. Our self-attention module is uniquely designed to not only compute relationships and dependencies as represented in the attention map but also to determine attention values. These values facilitate the bifurcation of input image features between two distinct generators. To enhance the efficacy of this separation, we devised a method to remap attention values. Additionally, our model integrates a paired down-scaling and up-scaling process, which significantly conserves GPU memory. This efficiency makes our model particularly suitable for lightweight devices. Experimental results indicate that our proposed model outperforms the original Pix2pix in image quality, as evidenced by both visual assessments and quantitative scores from semantic segmentation models.
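The abstract describes the architecture only at a high level. The sketch below shows one way such an attention-gated split between two generator branches, combined with paired down-scaling and up-scaling, could be wired up in PyTorch. It is not the authors' implementation: the module names (AttentionSplit, TwoBranchGenerator), the sigmoid used as the remapping step, and all channel and scale choices are assumptions made for illustration.

    # Minimal sketch (not the authors' code) of an attention-value gate that
    # splits input features between two generator branches.
    import torch
    import torch.nn as nn

    class AttentionSplit(nn.Module):
        """Self-attention block whose per-pixel output is remapped to a
        [0, 1] gate used to route features to two generators."""
        def __init__(self, channels: int):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, 1)
            self.key = nn.Conv2d(channels, channels // 8, 1)
            self.value = nn.Conv2d(channels, 1, 1)  # one attention value per pixel

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)                  # (B, HW, C//8)
            k = self.key(x).flatten(2)                                    # (B, C//8, HW)
            attn = torch.softmax((q @ k) / (q.shape[-1] ** 0.5), dim=-1)  # attention map (B, HW, HW)
            v = self.value(x).flatten(2).transpose(1, 2)                  # (B, HW, 1)
            a = (attn @ v).reshape(b, 1, h, w)                            # attention values per pixel
            # Remapping step: the abstract does not specify the remapping
            # function, so a sigmoid squash to (0, 1) is used as a placeholder.
            return torch.sigmoid(a)

    class TwoBranchGenerator(nn.Module):
        """Down-scale, split features by the attention gate, process with two
        branches, then up-scale back to the input resolution."""
        def __init__(self, in_ch: int = 3, feat: int = 64, out_ch: int = 3):
            super().__init__()
            # Paired down-scaling / up-scaling (x4 here) keeps the attention
            # and branch computation at reduced resolution.
            self.down = nn.Sequential(
                nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.gate = AttentionSplit(feat)
            self.branch_a = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
            self.branch_b = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
            self.up = nn.Sequential(
                nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            f = self.down(x)
            g = self.gate(f)                    # gate in [0, 1], shape (B, 1, H/4, W/4)
            out_a = self.branch_a(f * g)        # features routed to generator branch A
            out_b = self.branch_b(f * (1 - g))  # complementary features to branch B
            return self.up(out_a + out_b)

    if __name__ == "__main__":
        y = TwoBranchGenerator()(torch.randn(1, 3, 128, 128))
        print(y.shape)  # torch.Size([1, 3, 128, 128])

In this sketch the self-attention operates only on the down-scaled feature map, which keeps the HW x HW attention map small; this is presumably where the GPU-memory saving from the paired down-scaling and up-scaling would come from.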