Computer science
Generative grammar
Generative model
Benchmark (surveying)
Diffusion
Field (mathematics)
Process (computing)
Quality (philosophy)
Class (philosophy)
Artificial intelligence
Diffusion process
Data science
Machine learning
Mathematics
Diffusion of innovations
Geography
Geodesy
Pure mathematics
Philosophy
Knowledge management
Physics
Epistemology
Thermodynamics
Operating system
Authors
Hanqun Cao,Cheng Tan,Zhangyang Gao,Guangyong Chen,Pheng‐Ann Heng,Stan Z. Li
Source
Journal: Cornell University - arXiv
Date: 2022-09-06
Identifier
DOI:10.48550/arxiv.2209.02646
Abstract
Deep generative models are a prominent approach for data generation and have been used to produce high-quality samples in various domains. Diffusion models, an emerging class of deep generative models, have attracted considerable attention owing to their exceptional generative quality. Despite this, they have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space. This survey presents a plethora of advanced techniques aimed at enhancing diffusion models, including sampling acceleration and the design of new diffusion processes. In addition, we delve into strategies for implementing diffusion models in manifold and discrete spaces, maximum likelihood training for diffusion models, and methods for creating bridges between two arbitrary distributions. The innovations we discuss represent recent efforts to improve the functionality and efficiency of diffusion models. To examine the efficacy of existing models, we present a benchmark of FID score, IS, and NLL at a fixed number of function evaluations (NFE). Furthermore, diffusion models are found to be useful in various domains such as computer vision, audio, sequence modeling, and AI for science. The paper concludes with a summary of this field, along with existing limitations and future directions. A categorized summary of existing methods is available on our GitHub: https://github.com/chq1155/A-Survey-on-Generative-Diffusion-Model