Keywords
Regularization (linguistics), Decorrelation, Covariance, Embedding, Computer science, Algorithm, Artificial intelligence, Mathematics, Statistics
Authors
Adrien Bardes, Jean Ponce, Yann LeCun
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 268
Identifier
DOI: 10.48550/arxiv.2105.04906
Abstract
Recent self-supervised methods for image representation learning are based on maximizing the agreement between embedding vectors from different views of the same image. A trivial solution is obtained when the encoder outputs constant vectors. This collapse problem is often avoided through implicit biases in the learning architecture that often lack a clear justification or interpretation. In this paper, we introduce VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with a simple regularization term on the variance of the embeddings along each dimension individually. VICReg combines the variance term with a decorrelation mechanism based on redundancy reduction and covariance regularization, and achieves results on par with the state of the art on several downstream tasks. In addition, we show that incorporating our new variance term into other methods helps stabilize the training and leads to performance improvements.
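The three regularization terms described in the abstract can be sketched as a loss over two batches of embeddings. The sketch below is an illustration assuming the paper's published loss weights (25/25/1) and variance target γ = 1; function and argument names are invented for this example, and it is written in NumPy for clarity rather than as training-ready code.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """Illustrative VICReg-style objective for two views' embeddings
    z_a, z_b of shape (N, D). Not the authors' reference implementation."""
    n, d = z_a.shape

    # Invariance: mean-squared distance between embeddings of the two views.
    inv = np.mean((z_a - z_b) ** 2)

    # Variance: hinge that keeps each dimension's standard deviation
    # above gamma, which is what explicitly prevents collapse to a constant.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))
    var = var_term(z_a) + var_term(z_b)

    # Covariance: penalize off-diagonal entries of each view's covariance
    # matrix, decorrelating the embedding dimensions (redundancy reduction).
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d
    cov = cov_term(z_a) + cov_term(z_b)

    return sim_w * inv + var_w * var + cov_w * cov
```

Note how the collapse case the abstract mentions is handled: if the encoder outputs constant vectors, the invariance and covariance terms vanish, but the variance hinge saturates at roughly `var_w * 2 * gamma`, so the collapsed solution is no longer a minimum of the loss.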