Computer science
Stacking
Transformer
Generalization
Convolutional neural network
Artificial intelligence
Convolution (computer science)
Matching (statistics)
Deep learning
Algorithm
Machine learning
Pattern recognition (psychology)
Computer engineering
Artificial neural network
Electrical engineering
Mathematics
Engineering
Statistics
Physics
Mathematical analysis
Voltage
Nuclear magnetic resonance
Authors
Zihang Dai,Hanxiao Liu,Quoc V. Le,Mingxing Tan
Source
Journal: Cornell University - arXiv
Date: 2021-06-09
Citations: 19
Abstract
Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths of both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets. For example, CoAtNet achieves 86.0% ImageNet top-1 accuracy without extra data, and 89.77% with extra JFT data, outperforming prior art of both convolutional networks and Transformers. Notably, when pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT while using 23x less data.
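The first insight described in the abstract, unifying depthwise convolution and self-attention via relative attention, amounts to adding a learned, translation-equivariant bias for each relative position offset to the content-based attention logits before the softmax: the static offset-indexed term plays the role of a depthwise-convolution kernel, while the input-dependent dot-product term is ordinary self-attention. Below is a minimal NumPy sketch of that pre-softmax combination under illustrative assumptions (a 1-D single-head sequence and a scalar bias per offset); the function and variable names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention_1d(x, w_rel):
    """Combine content-based attention with a convolution-like static bias.

    x:     (L, d) token features (illustrative 1-D case).
    w_rel: (2L - 1,) learned scalar bias per relative offset j - i,
           acting like a depthwise-convolution kernel.

    The logit for pair (i, j) is x_i . x_j / sqrt(d) + w_rel[j - i],
    i.e. the pre-softmax sum of the dynamic (attention) and static
    (convolution) terms sketched in the abstract.
    """
    L, d = x.shape
    content = x @ x.T / np.sqrt(d)                             # (L, L) input-dependent logits
    offsets = np.arange(L)[None, :] - np.arange(L)[:, None]    # j - i in [-(L-1), L-1]
    static = w_rel[offsets + (L - 1)]                          # (L, L) translation-equivariant bias
    attn = softmax(content + static, axis=-1)                  # unified attention weights
    return attn @ x                                            # (L, d) output features

# Toy usage: 6 tokens with 4-dim features and a random relative-bias "kernel".
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
w_rel = rng.normal(size=(2 * 6 - 1,))
y = relative_attention_1d(x, w_rel)
print(y.shape)  # (6, 4)
```

Because the static bias depends only on the offset j - i, it is shared across positions, which is what gives the hybrid layer the translation-equivariance of a convolution while keeping the global receptive field of attention.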