Keywords: Discriminant, Benchmark (computing), Computer science, Feature (machine learning), Metric, Pattern recognition, Artificial intelligence, Machine learning, Feature learning
Authors
Xilang Huang, Seon Han Choi
Identifier
DOI: 10.1016/j.patcog.2022.109170
Abstract
Few-shot learning considers the problem of learning unseen categories given only a few labeled samples. As one of the most popular few-shot learning approaches, Prototypical Networks have received considerable attention owing to their simplicity and efficiency. However, a class prototype is typically obtained by averaging a few labeled samples belonging to the same class, which treats the samples as equally important and is thus prone to learning redundant features. Herein, we propose a self-attention based prototype enhancement network (SAPENet) to obtain a more representative prototype for each class. SAPENet utilizes multi-head self-attention mechanisms to selectively augment discriminative features in each sample feature map, and generates channel attention maps between intra-class sample features to attentively retain informative channel features for that class. The augmented feature maps and attention maps are finally fused to obtain representative class prototypes. Thereafter, a local descriptor-based metric module is employed to fully exploit the channel information of the prototypes by searching k similar local descriptors of the prototype for each local descriptor in the unlabeled samples for classification. We performed experiments on multiple benchmark datasets: miniImageNet, tieredImageNet, and CUB-200-2011. The experimental results on these datasets show that SAPENet achieves a considerable improvement compared to Prototypical Networks and also outperforms related state-of-the-art methods.
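The abstract describes two core ideas: enhancing the averaged class prototype with intra-class attention, and a local-descriptor metric that matches each query descriptor against its k most similar prototype descriptors. The following is a minimal NumPy sketch of those ideas under stated assumptions; the function names and the simple channel-attention weighting are illustrative stand-ins, not SAPENet's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_prototype(support):
    """Plain Prototypical Networks prototype: the mean of the k-shot
    support features of one class. support: (k_shot, d)."""
    return support.mean(axis=0)

def channel_weighted_prototype(support):
    """Hypothetical stand-in for SAPENet's channel attention maps:
    channels that are consistently strong across the intra-class support
    samples are up-weighted before averaging. This is a sketch of the
    idea, not the paper's exact mechanism."""
    attn = softmax(support.mean(axis=0))          # (d,) channel weights, sum to 1
    return (support * attn).mean(axis=0) * support.shape[1]

def knn_local_descriptor_score(query_desc, proto_desc, k=3):
    """Local-descriptor metric in the spirit of the abstract: for each
    local descriptor of the query, find its k most similar prototype
    descriptors (cosine similarity) and sum those similarities.
    query_desc: (m, d) local descriptors of the unlabeled sample
    proto_desc: (n, d) local descriptors of the class prototype."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    p = proto_desc / np.linalg.norm(proto_desc, axis=1, keepdims=True)
    sim = q @ p.T                                 # (m, n) cosine similarities
    topk = np.sort(sim, axis=1)[:, -k:]           # k best matches per descriptor
    return topk.sum()
```

In this sketch, a query would be assigned to the class whose prototype yields the highest `knn_local_descriptor_score`, analogous to how the metric module in the abstract exploits channel information at the local-descriptor level.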