Concepts
Computer science
Artificial intelligence
Block (permutation group theory)
Remote sensing
Optics (focusing)
Computer vision
Image resolution
Feature (linguistics)
Convolution (computer science)
Image fusion
Scale (ratio)
Pattern recognition (psychology)
Image (mathematics)
Artificial neural network
Geography
Linguistics
Philosophy
Physics
Geometry
Mathematics
Cartography
Optics
Authors
Zhang Shu, Qiangqiang Yuan, Jie Li, Jing Sun, Xuguo Zhang
Identifier
DOI: 10.1109/TGRS.2020.2966805
Abstract
Remote sensing image super-resolution has always been a major research focus, and many deep-learning-based algorithms have been proposed in recent years. However, since the structure of remote sensing images tends to be much more complex than that of natural images, several difficulties still remain for remote sensing image super-resolution. First, it is difficult to depict the nonlinear mapping between high-resolution (HR) and low-resolution (LR) images of different scenes with the same model. Second, the wide range of scales of the ground objects in remote sensing images makes it difficult for single-scale convolution to effectively extract features at various scales. To address the above-mentioned issues, we propose a multiscale attention network (MSAN) to extract the multilevel features of remote sensing images. The basic component of MSAN is the multiscale activation feature fusion block (MAFB). In addition, a scene-adaptive super-resolution strategy for remote sensing images is employed to more accurately describe the structural characteristics of different scenes. The experiments undertaken on several data sets confirm that the proposed algorithm outperforms the other state-of-the-art algorithms, in both evaluation indices and visual results.
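To make the multiscale idea in the abstract concrete, below is a minimal PyTorch sketch of a multiscale feature fusion block in the spirit of the MAFB: parallel convolutions with different kernel sizes extract features at several scales, which are fused and reweighted by a simple channel-attention gate. This is not the authors' implementation; the class name, channel counts, kernel sizes, and the squeeze-and-excitation-style attention are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiscaleFusionBlock(nn.Module):
    """Hypothetical multiscale fusion block (assumed structure, not the paper's MAFB)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel branches with 3x3, 5x5, and 7x7 receptive fields capture different scales.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.act = nn.ReLU(inplace=True)
        # 1x1 convolution fuses the concatenated multiscale activations.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        # Channel attention (squeeze-and-excitation style) as an assumed stand-in
        # for the attention mechanism referred to in the abstract.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.act(self.branch3(x)),
             self.act(self.branch5(x)),
             self.act(self.branch7(x))],
            dim=1,
        )
        fused = self.fuse(feats)
        # Residual connection preserves low-frequency content; attention rescales channels.
        return x + fused * self.attn(fused)


if __name__ == "__main__":
    block = MultiscaleFusionBlock(channels=64)
    lr_features = torch.randn(1, 64, 48, 48)  # dummy LR feature map
    print(block(lr_features).shape)  # torch.Size([1, 64, 48, 48])
```

In a full super-resolution network, several such blocks would typically be stacked before an upsampling stage (e.g., sub-pixel convolution); the scene-adaptive strategy mentioned in the abstract would then select or adapt models per scene type, which is not reflected in this sketch.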