Computer science
Encoder
Controllability
Sketch
Consistency (knowledge bases)
Frame (networking)
Motion (physics)
Motion vector
Frame of reference
Sequence (biology)
Computer vision
Artificial intelligence
Algorithm
Image (mathematics)
Operating system
Biology
Telecommunications
Genetics
Mathematics
Applied mathematics
Authors
Xiang Wang,Hangjie Yuan,Shiwei Zhang,Dayou Chen,Jiuniu Wang,Yingya Zhang,Yujun Shen,Deli Zhao,Jingren Zhou
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 34
Identifier
DOI:10.48550/arxiv.2306.02018
Abstract
The pursuit of controllability as a higher standard of visual content creation has yielded remarkable progress in customizable image synthesis. However, achieving controllable video synthesis remains challenging due to the large variation of temporal dynamics and the requirement of cross-frame temporal consistency. Based on the paradigm of compositional generation, this work presents VideoComposer that allows users to flexibly compose a video with textual conditions, spatial conditions, and more importantly temporal conditions. Specifically, considering the characteristic of video data, we introduce the motion vector from compressed videos as an explicit control signal to provide guidance regarding temporal dynamics. In addition, we develop a Spatio-Temporal Condition encoder (STC-encoder) that serves as a unified interface to effectively incorporate the spatial and temporal relations of sequential inputs, with which the model could make better use of temporal conditions and hence achieve higher inter-frame consistency. Extensive experimental results suggest that VideoComposer is able to control the spatial and temporal patterns simultaneously within a synthesized video in various forms, such as text description, sketch sequence, reference video, or even simply hand-crafted motions. The code and models will be publicly available at https://videocomposer.github.io.
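The abstract's key temporal condition is the motion vector field that video codecs already compute during compression. As a rough intuition for what such a signal looks like (not the paper's actual pipeline, which reads the vectors directly from the compressed bitstream), the sketch below estimates per-block motion vectors between two toy frames with exhaustive block matching in pure NumPy; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Estimate a (dy, dx) motion vector for each block of `curr` by
    exhaustively searching a +/-`search` pixel window in `prev` for the
    best-matching patch (minimum sum of absolute differences).
    This mimics, very loosely, the vectors a codec like H.264 stores."""
    H, W = prev.shape
    mv = np.zeros((H // block, W // block, 2), dtype=np.int32)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            patch = curr[y0:y0 + block, x0:x0 + block].astype(np.int64)
            best, best_d = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate window falls off the frame
                    cand = prev[y:y + block, x:x + block].astype(np.int64)
                    d = np.abs(patch - cand).sum()
                    if d < best_d:
                        best_d, best = d, (dy, dx)
            mv[by, bx] = best
    return mv

# Toy example: a bright square shifted 2 px right and 1 px down between frames.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[8:16, 8:16] = 255
curr = np.zeros((32, 32), dtype=np.uint8)
curr[9:17, 10:18] = 255

mv = block_motion_vectors(prev, curr)
# The block containing the square points back to its previous position:
# mv[1, 1] -> [-1, -2]  (i.e. the content moved +1 down, +2 right)
```

A dense field like `mv`, upsampled to pixel resolution, is the kind of explicit temporal guidance the abstract describes feeding into the STC-encoder alongside spatial conditions such as sketches.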