Computer science
Generative grammar
Fidelity
Artificial intelligence
Stability (learning theory)
Class (philosophy)
Image (mathematics)
Machine learning
Scale (ratio)
Generative model
Ranging
High fidelity
Pattern recognition (psychology)
Telecommunications
Physics
Quantum mechanics
Electrical engineering
Engineering
Authors
Yang Song, Stefano Ermon
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 255
Identifier
DOI: 10.48550/arxiv.2006.09011
Abstract
Score-based generative models can produce high quality image samples comparable to GANs, without requiring adversarial optimization. However, existing training procedures are limited to images of low resolution (typically below 32x32), and can be unstable under some settings. We provide a new theoretical analysis of learning and sampling from score models in high dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets. To enhance stability, we also propose to maintain an exponential moving average of model weights. With these improvements, we can effortlessly scale score-based generative models to images with unprecedented resolutions ranging from 64x64 to 256x256. Our score-based models can generate high-fidelity samples that rival best-in-class GANs on various image datasets, including CelebA, FFHQ, and multiple LSUN categories.
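The abstract names one concrete stabilization technique: maintaining an exponential moving average (EMA) of the model weights and using the averaged weights when sampling. Below is a minimal sketch of such an EMA update, assuming a PyTorch model; the decay value 0.999, the function name update_ema, and the placeholder network and loss are illustrative assumptions, not the authors' exact implementation.

import copy
import torch

def update_ema(ema_model, model, decay=0.999):
    # Blend the current weights into the running EMA copy:
    # ema <- decay * ema + (1 - decay) * current
    with torch.no_grad():
        for ema_param, param in zip(ema_model.parameters(), model.parameters()):
            ema_param.mul_(decay).add_(param, alpha=1.0 - decay)

# Usage sketch: keep a separate EMA copy of the score network and update it after
# every optimizer step; samples are then drawn with the EMA copy.
score_net = torch.nn.Linear(16, 16)            # stand-in for the score network
ema_net = copy.deepcopy(score_net)             # EMA copy used for sampling
optimizer = torch.optim.Adam(score_net.parameters(), lr=1e-4)

for step in range(100):
    loss = score_net(torch.randn(8, 16)).pow(2).mean()   # placeholder training loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    update_ema(ema_net, score_net)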