Keywords
Inductive bias, Overfitting, Transformer, Computer science, Artificial intelligence, Prior probability, Image resolution, Machine learning, Pattern recognition (psychology), Failure, Multi-task learning, Bayesian probability, Artificial neural network, Task (project management), Engineering, Voltage, Parallel computing, Systems engineering, Electrical engineering
Authors
Yuxuan Zhou, Wangmeng Xiang, Chao Li, Biao Wang, Xihan Wei, Lei Zhang, Margret Keuper, Xian-Sheng Hua
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 2
Identifiers
DOI: 10.48550/arxiv.2206.07662
Abstract
Recently, transformers have shown great potential in image classification and established state-of-the-art results on the ImageNet benchmark. However, compared to CNNs, transformers converge slowly and are prone to overfitting in low-data regimes due to the lack of spatial inductive biases. Such spatial inductive biases can be especially beneficial since the 2D structure of an input image is not well preserved in transformers. In this work, we present Spatial Prior-enhanced Self-Attention (SP-SA), a novel variant of vanilla Self-Attention (SA) tailored for vision transformers. Spatial Priors (SPs) are our proposed family of inductive biases that highlight certain groups of spatial relations. Unlike convolutional inductive biases, which are forced to focus exclusively on hard-coded local regions, our proposed SPs are learned by the model itself and take a variety of spatial relations into account. Specifically, the attention score is calculated with emphasis on certain kinds of spatial relations at each head, and such learned spatial foci can be complementary to each other. Based on SP-SA we propose the SP-ViT family, which consistently outperforms other ViT models with similar GFLOPs or parameters. Our largest model, SP-ViT-L, achieves a record-breaking 86.3% Top-1 accuracy with a reduction in the number of parameters by almost 50% compared to the previous state-of-the-art model (150M for SP-ViT-L vs. 271M for CaiT-M-36) among all ImageNet-1K models trained at 224×224 and fine-tuned at 384×384 resolution without extra data.
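The abstract only sketches the mechanism: each attention head emphasizes certain learned spatial relations when computing its attention scores. The paper's exact formulation of SP-SA is not given here, so the following is a minimal illustrative sketch under an assumed design, in which the spatial prior enters as a learned, per-head bias over 2D relative patch positions added to the attention logits before the softmax. The class name `SpatialPriorSelfAttention`, the `grid_size` parameter, and the relative-position indexing are hypothetical and should not be read as the authors' implementation.

```python
import torch
import torch.nn as nn


class SpatialPriorSelfAttention(nn.Module):
    """Illustrative sketch (not the SP-SA of the paper): multi-head
    self-attention whose logits are augmented with a learned, per-head
    bias over 2D relative positions between patch tokens."""

    def __init__(self, dim, num_heads, grid_size):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

        # One learnable bias per head for every possible 2D relative offset
        # on a grid_size x grid_size patch layout (assumed form of the prior).
        num_rel = (2 * grid_size - 1) ** 2
        self.rel_bias = nn.Parameter(torch.zeros(num_heads, num_rel))

        # Precompute a (query patch, key patch) -> relative-offset-id map.
        coords = torch.stack(torch.meshgrid(
            torch.arange(grid_size), torch.arange(grid_size), indexing="ij"), dim=-1)
        coords = coords.reshape(-1, 2)                    # (N, 2)
        rel = coords[:, None, :] - coords[None, :, :]     # (N, N, 2)
        rel += grid_size - 1                              # shift offsets to be >= 0
        idx = rel[..., 0] * (2 * grid_size - 1) + rel[..., 1]
        self.register_buffer("rel_idx", idx)              # (N, N)

    def forward(self, x):
        # x: (batch, N, dim) with N = grid_size * grid_size patch tokens.
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each (B, heads, N, head_dim)

        attn = (q @ k.transpose(-2, -1)) * self.scale     # (B, heads, N, N)
        attn = attn + self.rel_bias[:, self.rel_idx]      # add learned spatial prior per head
        attn = attn.softmax(dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Example usage with hypothetical sizes: a 14x14 patch grid,
# e.g. a 224x224 input split into 16x16 patches.
sp_attn = SpatialPriorSelfAttention(dim=384, num_heads=6, grid_size=14)
tokens = torch.randn(2, 14 * 14, 384)
out = sp_attn(tokens)   # (2, 196, 384)
```

Because the bias is indexed by relative position and learned separately for each head, different heads can specialize in different spatial relations (e.g. near vs. far, horizontal vs. vertical), which is one plausible reading of the complementary "spatial foci" described in the abstract.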