Texture synthesis
Computer science
Markov random field
Artificial intelligence
Texture filtering
Texture atlas
Texture compression
Texture (cosmology)
Convolutional neural network
Image texture
Pattern recognition (psychology)
Computer vision
Image processing
Image segmentation
Segmentation
Image (mathematics)
Authors
Yanhai Gan, Feng Gao, Junyu Dong, Sheng Chen
Identifier
DOI: 10.1109/tip.2022.3201710
Abstract
Existing deep-network-based texture synthesis approaches focus on fine-grained control of texture generation by synthesizing images from exemplars. Since the networks employed by most of these methods are tied to individual exemplar textures, a large number of separate networks have to be trained to model a variety of textures. In this paper, we propose to generate textures directly from coarse-grained control or high-level guidance, such as texture categories, perceptual attributes, and semantic descriptions. We accomplish this by casting the generation process of a texture as a three-level Bayesian hierarchical model. A coarse-grained signal first determines a distribution over Markov random fields. A Markov random field sampled from this distribution then models the distribution of output textures. Finally, an output texture is generated from the sampled Markov random field. At the bottom level of the Bayesian hierarchy, the isotropic and ergodic characteristics of textures favor a construction based on a fully convolutional network. The proposed method integrates texture creation and texture synthesis into one pipeline for real-time texture generation, and enables users to readily obtain diverse textures at arbitrary scales from high-level guidance alone. Extensive experiments demonstrate that the proposed method generates plausible textures that are faithful to user-defined control, and achieves impressive texture metamorphosis by interpolating in the learned texture manifold.
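The abstract describes a three-level Bayesian hierarchy: high-level guidance determines a distribution over Markov random fields (MRFs), a sampled MRF defines a distribution over textures, and the output texture is drawn from that MRF. One minimal way to write this factorization (the notation c, theta, x is ours, not taken from the paper):

% c: coarse-grained control (category, perceptual attributes, description)
% \theta: parameters of a Markov random field;  x: the output texture
p(x \mid c) = \int p(x \mid \theta)\, p(\theta \mid c)\, \mathrm{d}\theta,
\qquad \theta \sim p(\theta \mid c), \quad x \sim p(x \mid \theta).

For the bottom level, the abstract only states that a fully convolutional network is used. The sketch below, assuming a PyTorch setting, is merely an illustration of how full convolutionality yields outputs at arbitrary scales; it is not the authors' architecture, and all names and hyperparameters are hypothetical.

# A minimal sketch, assuming PyTorch; not the authors' network. It illustrates a
# fully convolutional generator that maps a noise field plus a spatially broadcast
# guidance code to a texture, so the same weights work at any output size.
import torch
import torch.nn as nn

class FullyConvTextureGenerator(nn.Module):
    def __init__(self, noise_ch=16, guide_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch + guide_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB texture in [-1, 1]
        )

    def forward(self, noise, guide):
        # noise: (B, noise_ch, H, W) random field; guide: (B, guide_dim) code that
        # would be sampled from a distribution conditioned on high-level guidance.
        b, _, h, w = noise.shape
        guide_map = guide.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([noise, guide_map], dim=1))

# Any H x W works because every layer is convolutional:
gen = FullyConvTextureGenerator()
tex = gen(torch.randn(1, 16, 256, 256), torch.randn(1, 32))  # -> (1, 3, 256, 256)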