Authors
Yue Lv,Tao Dai,Bin Chen,Jian Lü,Shu-Tao Xia,Jingchao Cao
Identifier
DOI:10.1109/icassp39728.2021.9414892
Abstract
Convolutional neural networks (CNNs) have obtained great success in single image super-resolution (SR). More recent works (e.g., RCAN and SAN) have achieved remarkable performance with channel attention based on first- or second-order statistics of features. However, these methods neglect the rich feature statistics of order higher than two, thus limiting the representation ability of CNNs. To address this issue, we propose a higher-order channel attention (HOCA) module to enhance the representation ability of CNNs. In our HOCA module, to capture different types of semantic information, we first compute k-th-order feature statistics, followed by channel attention to learn the feature interdependencies. Considering the diversity of input contents, we design a gate mechanism to adaptively select a specific k-th-order channel attention. Besides, our HOCA module serves as a plug-and-play module and can be easily plugged into existing state-of-the-art CNN-based SR methods. Extensive experiments on public benchmarks show that our HOCA module effectively improves the performance of various CNN-based SR methods.
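The abstract describes computing k-th-order feature statistics per channel, mapping them to attention weights, and using a gate to mix the different orders. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the function names are hypothetical, and the learned excitation network and learned gate are replaced by a plain sigmoid and a fixed uniform gate for illustration.

```python
import numpy as np

def kth_order_stat(feat, k):
    # feat: (C, H, W). Per-channel k-th-order statistic:
    # the mean for k=1, the k-th central moment for k >= 2.
    flat = feat.reshape(feat.shape[0], -1)
    if k == 1:
        return flat.mean(axis=1)
    mu = flat.mean(axis=1, keepdims=True)
    return ((flat - mu) ** k).mean(axis=1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hoca_sketch(feat, orders=(1, 2, 3), gate_logits=None):
    """Illustrative gated higher-order channel attention.

    For each order k, per-channel attention weights are derived from
    the k-th-order statistics; a softmax gate mixes the orders.
    In the paper both the excitation and the gate are learned; here
    a sigmoid and uniform gate logits stand in for them.
    """
    C = feat.shape[0]
    if gate_logits is None:
        gate_logits = np.zeros(len(orders))  # stand-in for a learned gate
    gate = np.exp(gate_logits) / np.exp(gate_logits).sum()
    attn = np.zeros(C)
    for g, k in zip(gate, orders):
        stats = kth_order_stat(feat, k)
        attn += g * sigmoid(stats)  # stand-in for the learned excitation
    # Rescale each channel by its attention weight (in (0, 1)).
    return feat * attn[:, None, None]

x = np.random.rand(8, 16, 16)   # a toy (C, H, W) feature map
y = hoca_sketch(x)
```

Because the gate weights sum to one and each sigmoid output lies in (0, 1), the combined attention weight for every channel also lies in (0, 1), so the module rescales channels without changing the feature map's shape.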