Upsampling
Computer science
Feature (linguistics)
Block (permutation group theory)
Convolutional neural network
Scale (ratio)
Artificial intelligence
Coding (set theory)
Pattern recognition (psychology)
Image (mathematics)
Mathematics
Philosophy
Set (abstract data type)
Programming language
Physics
Quantum mechanics
Linguistics
Geometry
Authors
Juncheng Li, Faming Fang, Jiaqian Li, Kangfu Mei, Guixu Zhang
Identifier
DOI: 10.1109/tcsvt.2020.3027732
Abstract
Convolutional neural networks have been proven to be of great benefit for single-image super-resolution (SISR). However, previous works do not make full use of multi-scale features and ignore the inter-scale correlation between different upsampling factors, resulting in sub-optimal performance. Instead of blindly increasing the depth of the network, we are committed to mining image features and learning the inter-scale correlation between different upsampling factors. To achieve this, we propose a Multi-scale Dense Cross Network (MDCN), which achieves great performance with fewer parameters and less execution time. MDCN consists of multi-scale dense cross blocks (MDCBs), a hierarchical feature distillation block (HFDB), and a dynamic reconstruction block (DRB). Among them, MDCB aims to detect multi-scale features and maximize the use of image feature flow at different scales, HFDB focuses on adaptively recalibrating channel-wise feature responses to achieve feature distillation, and DRB attempts to reconstruct SR images with different upsampling factors in a single model. It is worth noting that all of these modules can run independently, which means they can be selectively plugged into any CNN model to improve its performance. Extensive experiments show that MDCN achieves competitive results in SISR, especially in the reconstruction task with multiple upsampling factors. The code is provided at https://github.com/MIVRC/MDCN-PyTorch.
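The abstract only describes the architecture at a high level; the reference implementation is in the linked MDCN-PyTorch repository. The following PyTorch sketch merely illustrates the two ideas named there, multi-scale feature extraction (MDCB) and adaptive channel-wise recalibration for feature distillation (HFDB). The module names, kernel sizes, and reduction ratio are assumptions for illustration, not taken from the paper.

```python
# Minimal illustrative sketch (not the authors' code); see the GitHub repo above.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Extracts features at two receptive-field sizes and fuses them
    (a simplified stand-in for a multi-scale dense cross block)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y3 = self.act(self.conv3(x))          # small receptive field
        y5 = self.act(self.conv5(x))          # larger receptive field
        return x + self.fuse(torch.cat([y3, y5], dim=1))  # residual fusion

class ChannelRecalibration(nn.Module):
    """Adaptively reweights channel responses, in the spirit of the HFDB's
    channel-wise feature distillation (squeeze-and-excitation style)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(self.pool(x))    # per-channel gating weights

# Usage: both modules preserve the input shape, so they can be plugged
# into an existing CNN body as the abstract suggests.
x = torch.randn(1, 64, 48, 48)
y = ChannelRecalibration(64)(MultiScaleBlock(64)(x))
assert y.shape == x.shape
```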