Identifier
DOI: 10.17559/tv-20241024002089
Abstract
In recent years, blind image super-resolution (SR) methods have demonstrated promising performance but remain limited by inaccurate blur kernel estimation and difficulty in extracting global features. This paper introduces DHANet, a Dual-Stage Hybrid Attention Network that combines CNN- and Transformer-based modules for blind image SR. DHANet comprises a blur kernel predictor, a hybrid attention dual-path module (HADM) for enhanced feature extraction, and a feature refinement module (FRM) that reconstructs refined high-resolution images. Experiments on benchmark datasets demonstrate superior performance in terms of both quality and efficiency. Specifically, for ×2 SR our method raises the average PSNR over four benchmark datasets from 34.98 dB (the second-best comparison method) to 35.29 dB, and it achieves improvements of varying degrees in PSNR and SSIM across datasets for ×3 and ×4 SR.
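The abstract describes a two-stage pipeline (blur kernel prediction, then hybrid CNN/Transformer feature extraction and refinement) but gives no implementation details. The PyTorch sketch below is only a minimal illustration of that kind of pipeline under assumed shapes: the class names KernelPredictor, HADM, FRM, and DHANetSketch, the channel width, block count, and the simple kernel-conditioned modulation step are all hypothetical and are not the authors' code.

```python
# Minimal sketch of a dual-stage, dual-path blind-SR pipeline (hypothetical, not DHANet's code).
import torch
import torch.nn as nn


class KernelPredictor(nn.Module):
    """Stage 1 (assumed): estimate a per-image blur-kernel embedding from the LR input."""
    def __init__(self, in_ch=3, kernel_dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, kernel_dim, 3, padding=1),
            nn.AdaptiveAvgPool2d(1),              # global pooling -> one kernel code per image
        )

    def forward(self, x):
        return self.body(x).flatten(1)            # (B, kernel_dim)


class HADM(nn.Module):
    """Hybrid attention dual-path block (assumed layout):
    a CNN path for local features plus a self-attention path for global context."""
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)       # fuse the two paths back to ch channels

    def forward(self, x):
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) tokens for self-attention
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + self.fuse(torch.cat([local, glob], dim=1))


class FRM(nn.Module):
    """Feature refinement module (assumed): refine features and upsample via pixel shuffle."""
    def __init__(self, ch=64, scale=2, out_ch=3):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, out_ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.refine(x)


class DHANetSketch(nn.Module):
    """Two stages: predict a kernel code, condition the features on it, then reconstruct HR."""
    def __init__(self, ch=64, scale=2, n_blocks=4):
        super().__init__()
        self.kernel_predictor = KernelPredictor(kernel_dim=ch)
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[HADM(ch) for _ in range(n_blocks)])
        self.frm = FRM(ch, scale)

    def forward(self, lr):
        k = self.kernel_predictor(lr)              # (B, ch) kernel embedding
        feat = self.head(lr)
        feat = feat + feat * k[:, :, None, None]   # placeholder kernel-conditioned modulation
        return self.frm(self.body(feat))


if __name__ == "__main__":
    sr = DHANetSketch(scale=2)(torch.randn(1, 3, 24, 24))
    print(sr.shape)                                # torch.Size([1, 3, 48, 48]) for x2 SR
```

The dual-path split in HADM mirrors the abstract's point that convolutions alone struggle with global feature extraction: the convolutional branch keeps local detail while the attention branch aggregates context over the whole feature map, and the two are fused before the residual connection. The exact conditioning on the predicted kernel in the real DHANet is not specified in the abstract, so the modulation shown here is only a placeholder.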