Authors
Mehrdad Noori, Milad Cheraghalikhani, Ali Bahri, Gustavo A. Vargas Hakim, David Osowiechi, Ismail Ben Ayed, Christian Desrosiers
Identifier
DOI:10.1016/j.patcog.2023.110213
Abstract
Standard deep learning models such as convolutional neural networks (CNNs) lack the ability to generalize to domains not seen during training. This problem stems mainly from the common but often incorrect assumption that source and target data are drawn from the same i.i.d. distribution. Recently, Vision Transformers (ViTs) have shown outstanding performance on a broad range of computer vision tasks; however, few studies have investigated their ability to generalize to new domains. This paper presents the first Token-level Feature Stylization (TFS-ViT) approach for domain generalization, which improves the performance of ViTs on unseen data by synthesizing new domains. Our approach transforms token features by mixing the normalization statistics of images from different domains. We further improve this approach with a novel strategy for attention-aware stylization, which uses the attention maps of class (CLS) tokens to compute and mix normalization statistics of tokens corresponding to different image regions. The proposed method is agnostic to the choice of backbone and can be applied to any ViT-based architecture with a negligible increase in computational complexity. Comprehensive experiments show that our approach achieves state-of-the-art performance on five challenging domain-generalization benchmarks and can handle different types of domain shift. The implementation is available at this repository.
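The core idea of token-level feature stylization can be sketched in a few lines. The following is a minimal, hypothetical NumPy sketch (not the authors' implementation, which operates inside a ViT and includes the attention-aware variant): per-sample token statistics are computed over the token axis, then mixed with the statistics of a randomly permuted batch, following the MixStyle-like scheme the abstract describes. The function name, `alpha` parameter, and Beta-distribution mixing are assumptions for illustration.

```python
import numpy as np

def token_feature_stylization(x, alpha=0.1, rng=None):
    """Hypothetical sketch: stylize token features by mixing per-sample
    normalization statistics across a batch drawn from several domains.

    x: array of shape (B, N, D) -- B samples, N tokens, D channels.
    alpha: Beta-distribution parameter controlling the mixing strength.
    """
    rng = np.random.default_rng() if rng is None else rng
    B = x.shape[0]
    # Per-sample mean and std over the token axis (one value per channel).
    mu = x.mean(axis=1, keepdims=True)             # (B, 1, D)
    sigma = x.std(axis=1, keepdims=True) + 1e-6    # (B, 1, D)
    # Statistics of a randomly permuted batch act as the "other domain".
    perm = rng.permutation(B)
    lam = rng.beta(alpha, alpha, size=(B, 1, 1))   # per-sample mixing weight
    mu_mix = lam * mu + (1.0 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1.0 - lam) * sigma[perm]
    # Normalize each sample's tokens, then re-style with the mixed statistics.
    return (x - mu) / sigma * sigma_mix + mu_mix
```

With a batch of size one the permutation is the identity, so the input is recovered unchanged; with larger batches each sample's token statistics are pulled toward those of another sample, synthesizing novel "styles" at training time.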