Computer science
Similarity (geometry)
Distillation
Face (sociological concept)
Artificial intelligence
Image (mathematics)
Cosine similarity
Encoding (set theory)
Facial recognition system
Pattern recognition (psychology)
Object (grammar)
Low resolution
Machine learning
Resolution (logic)
High resolution
Programming language
Chemistry
Organic chemistry
Set (abstract data type)
Sociology
Geology
Remote sensing
Social science
Authors
Sungho Shin,Joosoon Lee,Junseok Lee,Yeonguk Yu,Kyoobin Lee
Source
Journal: Cornell University - arXiv
Date: 2022-09-29
Citations: 4
Identifiers
DOI: 10.48550/arxiv.2209.14498
Abstract
Deep learning has achieved outstanding performance on face recognition benchmarks, but performance drops significantly for low-resolution (LR) images. We propose an attention similarity knowledge distillation approach, which transfers attention maps obtained from a high-resolution (HR) network (the teacher) to an LR network (the student) to boost LR recognition performance. Inspired by humans' ability to approximate an object's region in an LR image based on prior knowledge obtained from HR images, we designed the knowledge distillation loss using cosine similarity to make the student network's attention resemble the teacher network's attention. Experiments on various LR face-related benchmarks confirmed that the proposed method generally improves recognition performance in LR settings, outperforming state-of-the-art results by simply transferring well-constructed attention maps. The code and pretrained models are publicly available at https://github.com/gist-ailab/teaching-where-to-look.
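The authors' exact loss formulation and attention-extraction code are in the repository linked above. Purely as an illustration of the idea described in the abstract, a minimal sketch of a cosine-similarity attention distillation term might look as follows; the function name `attention_similarity_kd_loss`, the tensor shapes, and the toy inputs are assumptions for this sketch, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def attention_similarity_kd_loss(student_attn: torch.Tensor,
                                 teacher_attn: torch.Tensor,
                                 eps: float = 1e-8) -> torch.Tensor:
    """Cosine-similarity distillation loss between attention maps.

    Both inputs are assumed to be attention maps of shape (B, C, H, W).
    The loss is 1 minus the mean per-sample cosine similarity of the
    flattened maps, so minimizing it pulls the student's attention
    toward the (frozen) teacher's attention.
    """
    s = student_attn.flatten(start_dim=1)            # (B, C*H*W)
    t = teacher_attn.flatten(start_dim=1).detach()   # no gradient to teacher
    cos = F.cosine_similarity(s, t, dim=1, eps=eps)  # per-sample similarity
    return (1.0 - cos).mean()

# Toy usage: random maps standing in for HR-teacher and LR-student attention.
teacher_map = torch.rand(4, 1, 7, 7)
student_map = torch.rand(4, 1, 7, 7, requires_grad=True)
loss = attention_similarity_kd_loss(student_map, teacher_map)
loss.backward()
print(loss.item())
```

In practice this term would be added, with some weighting, to the usual face recognition loss of the LR student network; the weighting scheme is not specified here and is defined in the paper and repository.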