Authors
Lin Zhang, Xin Li, Dongliang He, Fu Li, Errui Ding, Zhaoxiang Zhang
Identifier
DOI:10.1109/iccv51070.2023.01206
Abstract
It is widely agreed that reference-based super-resolution (RefSR) achieves superior results by referring to similar high-quality images, compared to single image super-resolution (SISR). Intuitively, the more references, the better the performance. However, previous RefSR methods have all focused on single-reference training, while multiple reference images are often available at test time or in practical applications. The root cause of this training-testing mismatch is the absence of publicly available multi-reference SR training datasets, which greatly hinders research on multi-reference super-resolution. To this end, we construct a large-scale multi-reference super-resolution dataset, named LMR. It contains 112,142 groups of 300×300 training images, 10× the size of the existing largest RefSR dataset, and the image size is also several times larger. More importantly, each group is equipped with 5 reference images of different similarity levels. Furthermore, we propose a new baseline method for multi-reference super-resolution, MRefSR, comprising a Multi-Reference Attention Module (MAM) for feature fusion over an arbitrary number of reference images, and a Spatial Aware Filtering Module (SAFM) for fused-feature selection. The proposed MRefSR achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations. Our code and data are available at: https://github.com/wdmwhh/MRefSR.
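To illustrate the core idea behind attention-based fusion of an arbitrary number of references (the role the abstract describes for MAM), here is a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: it fuses N reference feature vectors into one by weighting each with its softmax-normalized similarity to the low-resolution input feature, so the same function works for any N.

```python
# Toy sketch (hypothetical, NOT the MRefSR code) of similarity-weighted
# fusion over an arbitrary number of reference features.
import numpy as np

def fuse_references(lr_feat, ref_feats):
    """Fuse n reference feature vectors into one.

    lr_feat:   (d,)   feature of the low-resolution input
    ref_feats: (n, d) features of n reference images (n is arbitrary)
    returns:   (d,)   fused feature
    """
    # Scaled dot-product similarity of each reference to the LR feature.
    scores = ref_feats @ lr_feat / np.sqrt(lr_feat.shape[0])
    # Softmax turns similarities into fusion weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum collapses any number of references to one feature.
    return weights @ ref_feats

rng = np.random.default_rng(0)
lr = rng.standard_normal(8)
refs = rng.standard_normal((5, 8))   # 5 references, as in each LMR group
fused = fuse_references(lr, refs)
print(fused.shape)                   # (8,)
```

Because the weights are data-dependent rather than fixed per slot, the same module handles 1, 5, or more references at test time, which is the training-testing flexibility the paper motivates.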