Transformer
Computer science
Benchmark (surveying)
Image resolution
High resolution
Artificial intelligence
Pattern recognition (psychology)
Data mining
Voltage
Engineering
Remote sensing
Geography
Electrical engineering
Cartography
Authors
Yantao Ji, Peilin Jiang, Jingang Shi, Yu Guo, Ruiteng Zhang, Fei Wang
Identifier
DOI: 10.1109/icip46576.2022.9897359
Abstract
Super-resolution (SR) reconstruction is a typical ill-posed problem and can therefore be regarded as an information-growth process. Regions whose information increases sharply during deep feature extraction usually contain more high-frequency details, so directing more attention to these regions improves reconstruction performance. Transformer-based models have recently shown remarkable performance in SR, but they operate only on the features fed into the current layer and cannot capture how much information grows across successive layers. We therefore propose an information-growth Swin Transformer network (IGSTN) for single image super-resolution. IGSTN adaptively extracts information-growth global dependencies to generate a spatial attention map, which is then fused with the feature self-attention inside the Transformer to produce the final attention, allowing the model to focus on high-frequency regions and learn more high-frequency details from them. Extensive experiments on public benchmark datasets demonstrate the effectiveness of IGSTN.
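The abstract only outlines the fusion of an information-growth spatial attention with the Transformer's feature self-attention. The PyTorch snippet below is a minimal illustrative sketch of that idea, assuming information growth can be modeled as the feature change between successive layers; the module names, shapes, and the plain multi-head attention stand-in (used here instead of Swin window attention) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): gate self-attention output with
# a spatial map derived from how much the features change across layers.
import torch
import torch.nn as nn


class InformationGrowthAttention(nn.Module):
    """Turn the feature change between successive layers into a spatial gate."""

    def __init__(self, channels: int):
        super().__init__()
        # Project the per-pixel information growth into a single-channel map.
        self.proj = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, prev_feat: torch.Tensor, curr_feat: torch.Tensor) -> torch.Tensor:
        # "Information growth" modeled as the absolute feature difference
        # across layers; regions that change a lot receive higher attention.
        growth = (curr_feat - prev_feat).abs()
        return torch.sigmoid(self.proj(growth))  # (B, 1, H, W), values in [0, 1]


class IGFusionBlock(nn.Module):
    """Feature self-attention (plain multi-head attention over pixels as a
    stand-in for Swin window attention) modulated by the growth-based gate."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.ig_attn = InformationGrowthAttention(channels)

    def forward(self, prev_feat: torch.Tensor, curr_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = curr_feat.shape
        tokens = self.norm(curr_feat.flatten(2).transpose(1, 2))  # (B, H*W, C)
        sa_out, _ = self.attn(tokens, tokens, tokens)             # feature self-attention
        sa_out = sa_out.transpose(1, 2).reshape(b, c, h, w)
        gate = self.ig_attn(prev_feat, curr_feat)                 # spatial attention
        # Fuse: emphasize positions where information grows the most.
        return curr_feat + sa_out * gate


if __name__ == "__main__":
    prev = torch.randn(1, 64, 32, 32)   # features from the previous layer
    curr = torch.randn(1, 64, 32, 32)   # features from the current layer
    out = IGFusionBlock(64)(prev, curr)
    print(out.shape)                    # torch.Size([1, 64, 32, 32])
```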