Computer science
Artificial neural network
Artificial intelligence
Painting
Computer vision
Image processing
Computer graphics (images)
Visualization
Image (mathematics)
Visual arts
Art
Authors
Fachao Zhang, Xiaoman Liang, Hongda Mou, Yuan Qin, Yue Chen, Huihuang Zhao
Identifier
DOI:10.1117/1.jei.33.6.063037
Abstract
We propose an image-to-painting translation method that generates paintings stroke by stroke. Unlike previous pixel-to-pixel or sequential optimization methods, our method generates a set of physically meaningful stroke parameters, which is closer to the way humans draw. These parameters can then be rendered by a renderer. We add an attention mechanism network to the proposed renderer to improve the quality of the painted images, and we use a smooth L1 loss when training the renderer to make the model converge faster. Our method can be combined with neural style transfer, where we use a Visual Geometry Group (VGG) perceptual loss to obtain more realistic results. The experimental results show that the renderer used in our method outperforms other renderers, improving the peak signal-to-noise ratio (PSNR) metric by 4.9% compared with previous renderers.
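The smooth L1 loss mentioned for renderer training can be sketched as below. This is a generic, minimal implementation for illustration only, not the authors' code; the threshold `beta = 1.0` and the elementwise averaging are assumptions.

```python
def smooth_l1(pred, target, beta=1.0):
    """Elementwise smooth L1 (Huber-style) loss, averaged over the inputs.

    Quadratic near zero (smooth gradients, fast convergence) and linear
    for large errors (robust to outliers). `beta` is an assumed threshold.
    """
    total = 0.0
    for p, t in zip(pred, target):
        diff = abs(p - t)
        if diff < beta:
            total += 0.5 * diff * diff / beta  # quadratic region
        else:
            total += diff - 0.5 * beta         # linear region
    return total / len(pred)
```

For example, a small residual of 0.5 contributes 0.125 (quadratic), while a large residual of 2.0 contributes only 1.5 (linear), rather than the 2.0 of a plain L1 or the 4.0 of a squared loss.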