Computer science
Convolution (computer science)
Diffusion
Encoding (memory)
Artificial intelligence
Image (mathematics)
Noise (video)
Segmentation
Convolutional neural network
Set (abstract data type)
Pattern recognition (psychology)
Zero (linguistics)
Artificial neural network
Image segmentation
Algorithm
Computer vision
Physics
Thermodynamics
Linguistics
Philosophy
Programming language
Authors
Lvmin Zhang, Anyi Rao, Maneesh Agrawala
Identifiers
DOI: 10.1109/ICCV51070.2023.00355
Abstract
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
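The "zero convolution" described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation, only an assumed-typical rendering of the idea: a 1×1 convolution whose weights and bias are initialized to zero, so that at the start of finetuning the conditioning branch adds exactly zero to the frozen backbone's features and injects no harmful noise; its parameters then grow away from zero during training. The class name `ZeroConv2d` is hypothetical.

```python
import torch
import torch.nn as nn

class ZeroConv2d(nn.Module):
    """A 1x1 convolution whose weights and bias start at zero (hypothetical
    sketch of the 'zero convolution' idea, not the official ControlNet code)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Zero-initialize so the layer's output is identically zero at first.
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

# At initialization the zero conv contributes nothing: adding its output to
# a frozen backbone feature map leaves that feature map unchanged.
zero_conv = ZeroConv2d(channels=4)
backbone_feat = torch.randn(1, 4, 8, 8)
control_feat = torch.randn(1, 4, 8, 8)
merged = backbone_feat + zero_conv(control_feat)
assert torch.equal(merged, backbone_feat)
```

Because gradients still flow through the zero-initialized weights, the branch can learn a nonzero contribution over the course of finetuning while the pretrained model's behavior is preserved at step zero.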