Authors
Chengxing Xie, Xiaoming Zhang, Linze Li, Haiteng Meng, Tianlin Zhang, Tianrui Li, Xiaole Zhao
Identifier
DOI:10.1109/cvprw59228.2023.00135
Abstract
Efficient and lightweight single-image super-resolution (SISR) models have achieved remarkable performance in recent years. One effective approach is the use of large kernel designs, which have been shown to improve the performance of SISR models while reducing their computational requirements. However, current state-of-the-art (SOTA) models still suffer from high computational costs. To address this issue, we propose the Large Kernel Distillation Network (LKDN). Our approach simplifies the model structure and introduces more efficient attention modules, reducing computational cost while also improving performance. Specifically, we employ the reparameterization technique to enhance model performance without adding extra inference cost. We also introduce a new optimizer from other tasks to SISR, which improves both training speed and performance. Experimental results demonstrate that LKDN outperforms existing lightweight SR methods and achieves SOTA performance.
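The abstract's claim that reparameterization "enhances performance without adding extra inference cost" rests on a standard idea: train with several parallel linear branches, then fold them into a single convolution before deployment. Below is a minimal, hypothetical PyTorch sketch of that general technique (RepVGG-style branch fusion); `RepBlock` and all names here are illustrative assumptions, not taken from the LKDN paper or codebase.

```python
# Hypothetical sketch of structural reparameterization: a training-time block
# with parallel 3x3 and 1x1 convolution branches is merged into one 3x3
# convolution for inference. Illustrative only; not the LKDN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBlock(nn.Module):
    """Parallel 3x3 + 1x1 branches during training; one fused 3x3 conv at inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=True)
        self.fused = None  # populated by reparameterize()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.fused is not None:
            return self.fused(x)               # inference path: single conv
        return self.conv3(x) + self.conv1(x)   # training path: two branches

    @torch.no_grad()
    def reparameterize(self) -> None:
        """Fold the 1x1 branch into the 3x3 kernel; linearity makes the sum exact."""
        channels = self.conv3.out_channels
        fused = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        # Zero-pad the 1x1 kernel to 3x3 so both branches share one weight tensor.
        w1_padded = F.pad(self.conv1.weight, [1, 1, 1, 1])
        fused.weight.copy_(self.conv3.weight + w1_padded)
        fused.bias.copy_(self.conv3.bias + self.conv1.bias)
        self.fused = fused

if __name__ == "__main__":
    block = RepBlock(8).eval()
    x = torch.randn(1, 8, 16, 16)
    y_train = block(x)
    block.reparameterize()
    y_fused = block(x)
    # The fused conv reproduces the two-branch output up to float rounding.
    print(torch.allclose(y_train, y_fused, atol=1e-6))  # True
```

Because convolution is linear, summing the kernels is mathematically equivalent to summing the branch outputs, so the extra branches cost nothing at inference while still enriching the optimization landscape during training.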