Discriminant
Computer science
Generative adversarial network
Generative grammar
Rendering (computer graphics)
Artificial intelligence
Adversarial system
Consistency (knowledge bases)
Image (mathematics)
Object (grammar)
Set (abstract data type)
Pattern recognition (psychology)
Machine learning
Programming language
Authors
Chaoyue Wang, Chaohui Wang, Chang Xu, Dacheng Tao
Identifier
DOI:10.24963/ijcai.2017/404
Abstract
In this paper, we propose a principled Tag Disentangled Generative Adversarial Networks (TD-GAN) for re-rendering new images for the object of interest from a single image of it by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly based on a given set of images that are completely/partially tagged (i.e., supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images by the generative network. In order to boost the quality of disentangled representations, the tag mapping net is integrated to explore the consistency between the image and its tags. Furthermore, the discriminative network is introduced to implement the adversarial training strategy for generating more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework in the problem of interest.
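The pipeline the abstract describes — a disentangling network that splits an image into interpretable factors, a tag mapping net that ties scene tags (viewpoint, illumination, expression) to those factors, and a generative network that re-renders the image with some factors swapped — can be sketched schematically. The following toy sketch is an illustration only, not the paper's implementation: the "networks" are single random linear maps, and all dimensions and tag encodings are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
IMG_DIM, ID_DIM, TAG_DIM, N_TAGS = 32, 8, 4, 3

# Stand-ins for the trained networks: single linear maps instead of deep nets.
W_dis = rng.standard_normal((ID_DIM + TAG_DIM, IMG_DIM)) * 0.1  # disentangling network
W_gen = rng.standard_normal((IMG_DIM, ID_DIM + TAG_DIM)) * 0.1  # generative network
W_map = rng.standard_normal((TAG_DIM, N_TAGS)) * 0.1            # tag mapping net

def disentangle(x):
    """Split an image into identity factors and tag (scene-property) factors."""
    z = W_dis @ x
    return z[:ID_DIM], z[ID_DIM:]

def map_tags(tags):
    """Map discrete scene tags (viewpoint, illumination, ...) to tag factors."""
    return W_map @ tags

def generate(identity, tag_repr):
    """Re-render an image from identity factors plus (possibly new) tag factors."""
    return W_gen @ np.concatenate([identity, tag_repr])

x = rng.standard_normal(IMG_DIM)       # a single input image, flattened
identity, _ = disentangle(x)           # keep the object's identity
new_tags = np.array([1.0, 0.0, 0.0])   # hypothetical encoding of a new viewpoint
x_new = generate(identity, map_tags(new_tags))  # re-rendered image, shape (32,)
```

In the full TD-GAN, a discriminative network is trained adversarially against the generative network to push `x_new` toward realistic images, and the tag mapping net enforces consistency between an image's tag factors and its given tags, which is what enables the semi-supervised (partially tagged) setting.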