Clothing
Computer science
Generator
Artificial intelligence
Modality
Collocation
Image
Representation
Information retrieval
Computer vision
Natural language processing
Machine learning
Authors
Linlin Liu,Haijun Zhang,Qun Li,Jianghong Ma,Zhao Zhang
Abstract
Synthesizing realistic images of fashion items that are compatible with given clothing images, conditioned on multiple modalities, enables novel applications with enormous economic potential. In this work, we propose a multi-modal collocation framework based on a generative adversarial network (GAN) for synthesizing compatible clothing images. Given an input clothing item consisting of an image and a text description, our model synthesizes a clothing image that is compatible with the input clothing while being guided by a given text description from the target domain. Specifically, a generator synthesizes realistic and collocated clothing images from image- and text-based latent representations learned from the source domain, and an auxiliary text representation from the target domain supervises the generation results. In addition, a multi-discriminator framework determines both the compatibility between the generated and input clothing images and the visual-semantic matching between the generated clothing images and the target textual information. Extensive quantitative and qualitative results demonstrate that our model substantially outperforms state-of-the-art methods in terms of authenticity, diversity, and visual-semantic similarity between image and text.
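The abstract describes the architecture only at a high level. For intuition, below is a minimal PyTorch sketch of a multi-modal collocation GAN of this general shape; every module name, dimension, and the concatenation-based fusion strategy are illustrative assumptions, not the authors' published implementation.

# Illustrative sketch of a multi-modal collocation GAN (hypothetical
# module names and dimensions; not the authors' released code).
import torch
import torch.nn as nn

IMG_CH, IMG_SIZE, TXT_DIM, Z_DIM = 3, 64, 256, 100


class Generator(nn.Module):
    """Maps noise + source image/text latents + target text latent to an image."""

    def __init__(self):
        super().__init__()
        cond_dim = Z_DIM + TXT_DIM * 2 + 512  # noise + two text codes + image code
        self.img_enc = nn.Sequential(          # encode the source clothing image
            nn.Conv2d(IMG_CH, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 16 * 16, 512),
        )
        self.dec = nn.Sequential(              # decode the fused code to an image
            nn.Linear(cond_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, IMG_CH, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, src_img, src_txt, tgt_txt):
        code = torch.cat([z, self.img_enc(src_img), src_txt, tgt_txt], dim=1)
        return self.dec(code)


class CompatibilityD(nn.Module):
    """Scores whether a (source, generated) image pair forms a compatible outfit."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CH * 2, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 16 * 16, 1),
        )

    def forward(self, src_img, gen_img):
        return self.net(torch.cat([src_img, gen_img], dim=1))


class MatchingD(nn.Module):
    """Scores visual-semantic agreement between an image and a text embedding."""

    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(IMG_CH, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 32 * 32, TXT_DIM),
        )

    def forward(self, img, txt):
        # Dot-product matching score between projected image and text codes.
        return (self.img_enc(img) * txt).sum(dim=1, keepdim=True)


# Smoke test with random tensors standing in for real data.
g, d_comp, d_match = Generator(), CompatibilityD(), MatchingD()
z = torch.randn(2, Z_DIM)
src = torch.randn(2, IMG_CH, IMG_SIZE, IMG_SIZE)
src_txt, tgt_txt = torch.randn(2, TXT_DIM), torch.randn(2, TXT_DIM)
fake = g(z, src, src_txt, tgt_txt)
print(fake.shape, d_comp(src, fake).shape, d_match(fake, tgt_txt).shape)

The two discriminators mirror the multi-discriminator idea in the abstract: one judges pairwise outfit compatibility, the other judges visual-semantic matching against the target text embedding; in an actual training loop both losses would be combined to supervise the generator.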