Deblurring
Computer Science
Computer Vision
Context
Artificial Intelligence
Image Processing
Image (mathematics)
Image Restoration
Authors
Yaowei Li, Hang An, Tong Zhang, Xiaoxuan Chen, Bo Jiang, Jinshan Pan
Identifier
DOI: 10.1109/tcsvt.2025.3549853
Abstract
Existing CNN-based and Transformer-based methods have demonstrated remarkable performance in low-level vision tasks, including image deblurring. However, these methods generally capture spatial features in only a single way, such as by stacking CNN or Transformer blocks, resulting in inadequate use of spatial context. To address this issue, we propose a new feature aggregation scheme for image deblurring, named Omni-Deblurring. Its core is the omni-range context block, which explicitly aggregates local-range, regional-range, and global-range features in a compact manner. This design provides a wider receptive field for modeling contextual features. Extensive experiments on synthetic and real-world blurry datasets demonstrate the effectiveness of the proposed method in both quantitative and qualitative evaluations. Furthermore, the quality of our deblurring model is evaluated on the downstream task of object detection, where the mean Average Precision (mAP) increases by 10% across all classes compared with other deblurring models. Code is available at https://github.com/yaowli468/Omni-Deblurring.
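To make the idea of aggregating local-, regional-, and global-range context concrete, below is a minimal NumPy sketch of a three-branch aggregation, not the paper's actual architecture (which uses CNN and Transformer blocks; see the linked repository). All function names, the window size, and the uniform fusion weights are illustrative assumptions: the local branch pools a small neighborhood, the regional branch pools within non-overlapping windows, the global branch pools over the whole feature map, and the three results are concatenated and projected back to the input channel count.

```python
import numpy as np

def local_branch(x, k=3):
    # Local-range context: average over a k x k neighborhood (stand-in for a small conv).
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + H, j:j + W]
    return out / (k * k)

def regional_branch(x, win=4):
    # Regional-range context: average within non-overlapping win x win windows.
    H, W, C = x.shape
    out = np.empty_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            out[i:i + win, j:j + win] = x[i:i + win, j:j + win].mean(
                axis=(0, 1), keepdims=True
            )
    return out

def global_branch(x):
    # Global-range context: global average pooled vector broadcast back to H x W.
    return np.broadcast_to(x.mean(axis=(0, 1), keepdims=True), x.shape).copy()

def omni_range_context(x):
    # Concatenate the three ranges along channels, then fuse with a 1x1-style
    # projection back to C channels (uniform placeholder weights here).
    H, W, C = x.shape
    feats = np.concatenate(
        [local_branch(x), regional_branch(x), global_branch(x)], axis=-1
    )
    w_proj = np.full((3 * C, C), 1.0 / (3 * C))
    return feats @ w_proj

x = np.random.default_rng(0).standard_normal((8, 8, 16))
y = omni_range_context(x)
print(y.shape)  # (8, 8, 16): output keeps the input spatial size and channels
```

In a trained network the placeholder averaging and fusion weights would be learned layers (e.g. depthwise convolution, window attention, and global attention), but the shape bookkeeping, three parallel ranges fused into one feature map of the original size, is the same.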