Authors
Yu Liu, Wei Chen, Li Liu, Michael S. Lew
Identifier
DOI: 10.1109/tmm.2019.2897897
Abstract
Fashion style transfer has attracted significant attention because it poses interesting scientific challenges and is important to the fashion industry. This paper focuses on a practical problem in fashion style transfer, person-to-person clothing swapping, which aims to visualize what a person would look like in the clothes worn by another person, without physically dressing them. The problem remains challenging due to the varying pose deformations between different person images. In contrast to traditional nonparametric methods that blend or warp the target clothes onto the reference person, we propose a multistage deep generative approach named SwapGAN, which exploits three generators and one discriminator in a unified framework to fulfill the task end-to-end. The first and second generators are conditioned on a human pose map and a segmentation map, respectively, so that the pose style and the clothes style can be transferred simultaneously. The third generator preserves the human body shape during image synthesis. The discriminator must distinguish two fake image pairs from the real image pair. The entire SwapGAN is trained by integrating an adversarial loss with a mask-consistency loss. Experimental results on the DeepFashion dataset demonstrate the improvements of SwapGAN over existing approaches in both quantitative and qualitative evaluations. Moreover, we conduct ablation studies on SwapGAN and provide a detailed analysis of its effectiveness.
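To make the described pipeline concrete, below is a minimal sketch of how such a training step could be wired up. This is not the authors' code: the abstract only states that three generators (one conditioned on a pose map, one on a segmentation map, and a third preserving body shape) and one discriminator are trained with an adversarial loss integrated with a mask-consistency loss, so all module architectures, the exact stage wiring, and the weight `lambda_mask` below are illustrative assumptions.

```python
# A minimal, hypothetical PyTorch sketch of a three-generator /
# one-discriminator training step as described in the abstract. The
# network designs, the wiring of the stages, and `lambda_mask` are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy conditional generator: image + condition map -> synthesized image."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, image, cond):
        return self.net(torch.cat([image, cond], dim=1))

class Discriminator(nn.Module):
    """Toy discriminator scoring an image pair as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1)).mean(dim=(1, 2, 3))

# Three generators, one discriminator, as in the abstract.
G_pose = CondGenerator(3 + 1)   # RGB image + 1-channel pose map (assumed)
G_seg  = CondGenerator(3 + 1)   # RGB image + 1-channel segmentation map (assumed)
G_body = CondGenerator(3 + 3)   # swapped image + reference image (assumed)
D = Discriminator()

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_mask = 10.0  # assumed weight balancing the two loss terms

def train_step(ref_img, target_img, pose_map, seg_map, body_mask):
    # Stages 1-2: transfer the pose style, then the clothes style.
    posed = G_pose(target_img, pose_map)
    swapped = G_seg(posed, seg_map)
    # Stage 3: re-synthesize so the reference body shape is preserved.
    recon = G_body(swapped, ref_img)

    # Discriminator distinguishes two fake pairs from the real pair.
    real = D(ref_img, target_img)
    fake1 = D(swapped.detach(), target_img)
    fake2 = D(recon.detach(), target_img)
    d_loss = (bce(real, torch.ones_like(real))
              + bce(fake1, torch.zeros_like(fake1))
              + bce(fake2, torch.zeros_like(fake2)))

    # Generators: adversarial loss integrated with a mask-consistency
    # loss, sketched here as an L1 penalty on the masked body region.
    g_adv = bce(D(swapped, target_img), torch.ones_like(fake1))
    g_loss = g_adv + lambda_mask * l1(recon * body_mask, ref_img * body_mask)
    return d_loss, g_loss
```

In a full implementation the discriminator and generator losses would be minimized alternately with separate optimizers, as is standard for GAN training; the sketch only shows how the two fake pairs and the combined objective fit together.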