Residual
Transformer
Computer science
Computer vision
Human-computer interaction
Artificial intelligence
Engineering
Electrical engineering
Voltage
Algorithm
Authors
Anxhelo Diko, Danilo Avola, Marco Cascio, Luigi Cinque
Source
Journal: Cornell University - arXiv
Date: 2024-02-17
Identifier
DOI:10.48550/arxiv.2402.11301
Abstract
The Vision Transformer (ViT) self-attention mechanism is characterized by feature collapse in deeper layers, resulting in the vanishing of low-level visual features. However, such features can be helpful to accurately represent and identify elements within an image and increase the accuracy and robustness of vision-based recognition systems. Following this rationale, we propose a novel residual attention learning method for improving ViT-based architectures, increasing their visual feature diversity and model robustness. In this way, the proposed network can capture and preserve significant low-level features, providing more details about the elements within the scene being analyzed. The effectiveness and robustness of the presented method are evaluated on five image classification benchmarks, including ImageNet1k, CIFAR10, CIFAR100, Oxford Flowers-102, and Oxford-IIIT Pet, achieving improved performance. Additionally, experiments on the COCO2017 dataset show that the devised approach discovers and incorporates semantic and spatial relationships for object detection and instance segmentation when implemented into spatial-aware transformer models.
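The abstract does not spell out the exact formulation, but one plausible reading of "residual attention learning" is that each layer's attention map is blended with the previous layer's map, so low-level attention patterns are not washed out as depth grows. The sketch below is a hypothetical single-head illustration of that idea in numpy; the function name `residual_attention`, the mixing weight `alpha`, and the convex-combination rule are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_attention(q, k, v, prev_attn=None, alpha=0.5):
    """Hypothetical residual attention sketch (not the paper's exact rule):
    blend the current softmax attention map with the previous layer's map,
    preserving earlier (lower-level) attention patterns in deeper layers."""
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))          # standard scaled dot-product map
    if prev_attn is not None:
        attn = alpha * attn + (1 - alpha) * prev_attn  # residual mix of maps
    return attn @ v, attn

# Toy usage: two "layers" sharing an attention residual stream.
rng = np.random.default_rng(0)
n, d = 4, 8                                       # 4 tokens, 8-dim features
q1, k1, v1 = rng.standard_normal((3, n, d))
out1, a1 = residual_attention(q1, k1, v1)         # first layer, no residual
out2, a2 = residual_attention(out1, k1, v1, prev_attn=a1)  # deeper layer reuses a1
```

Because both maps are row-stochastic, their convex combination remains a valid attention distribution, so the residual stream can be threaded through arbitrarily many layers.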