Author
Zhen Xing, Qijun Feng, Hao Chen, Qi Dai, Han Hu, Hang Xu, Zuxuan Wu, Yu-Gang Jiang
Source
Journal: Cornell University - arXiv
Date: 2023-10-16
Citations: 5
Identifier
DOI: 10.48550/arxiv.2310.10647
Abstract
The recent wave of AI-generated content (AIGC) has witnessed substantial success in computer vision, with the diffusion model playing a crucial role in this achievement. Due to their impressive generative capabilities, diffusion models are gradually superseding methods based on GANs and auto-regressive Transformers, demonstrating exceptional performance not only in image generation and editing, but also in the realm of video-related research. However, existing surveys mainly focus on diffusion models in the context of image generation, with few up-to-date reviews on their application in the video domain. To address this gap, this paper presents a comprehensive review of video diffusion models in the AIGC era. Specifically, we begin with a concise introduction to the fundamentals and evolution of diffusion models. Subsequently, we present an overview of research on diffusion models in the video domain, categorizing the work into three key areas: video generation, video editing, and other video understanding tasks. We conduct a thorough review of the literature in these three key areas, including further categorization and practical contributions in the field. Finally, we discuss the challenges faced by research in this domain and outline potential future developmental trends. A comprehensive list of video diffusion models studied in this survey is available at https://github.com/ChenHsing/Awesome-Video-Diffusion-Models.
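The abstract mentions a concise introduction to the fundamentals of diffusion models. As background, the forward process of a standard denoising diffusion model (DDPM-style) gradually corrupts data with Gaussian noise according to a variance schedule, so that x_t can be sampled in closed form from x_0. The sketch below illustrates only this generic forward process; the schedule values and toy data are illustrative assumptions, not taken from the surveyed paper.

```python
import numpy as np

# Linear variance schedule; T and the beta range are common illustrative choices.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product \bar{alpha}_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))   # toy "image" standing in for a video frame
xT = q_sample(x0, T - 1, rng)      # by t = T-1 the sample is almost pure noise
```

A learned network is then trained to reverse this process step by step; video diffusion models extend the same idea with temporal layers across frames.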