Image restoration
Degradation (telecommunications)
Computer science
Artificial intelligence
Computer vision
Image processing
Image (mathematics)
Pattern recognition (psychology)
Telecommunications
Authors
Xu Zhang, Jiaqi Ma, Guoli Wang, Qian Zhang, Huan Zhang, Lefei Zhang
Identifier
DOI:10.1109/tip.2025.3566300
Abstract
Existing All-in-One image restoration methods often fail to simultaneously perceive degradation types and severity levels, overlooking the importance of fine-grained quality perception. Moreover, these methods often utilize highly customized backbones, which hinder their adaptability and integration into more advanced restoration networks. To address these limitations, we propose Perceive-IR, a novel backbone-agnostic All-in-One image restoration framework designed for fine-grained quality control across various degradation types and severity levels. Its modular structure allows core components to function independently of specific backbones, enabling seamless integration into advanced restoration models without significant modifications. Specifically, Perceive-IR operates in two key stages: (1) a multi-level quality-driven prompt learning stage, where a fine-grained quality perceiver is meticulously trained to discern three-tier quality levels by optimizing the alignment between prompts and images within the CLIP perception space. This stage ensures a nuanced understanding of image quality, laying the groundwork for subsequent restoration; (2) a restoration stage, where the quality perceiver is seamlessly integrated with a difficulty-adaptive perceptual loss, forming a quality-aware learning strategy. This strategy not only dynamically differentiates sample learning difficulty but also achieves fine-grained quality control by driving the restored image toward the ground truth while simultaneously pulling it away from both low- and medium-quality samples. Furthermore, Perceive-IR incorporates a Semantic Guidance Module (SGM) and Compact Feature Extraction (CFE). The SGM leverages semantic information from pre-trained vision models to provide high-level contextual guidance, while the CFE focuses on extracting degradation-specific features, ensuring accurate handling of diverse image degradations. Extensive experiments demonstrate that Perceive-IR not only surpasses state-of-the-art methods but also generalizes reliably to zero-shot real-world and unknown degraded scenes, while adapting seamlessly to different backbone networks. This versatility underscores the framework's robustness and backbone-agnostic design.
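The two mechanisms named in the abstract — three-tier quality perception via prompt-image alignment in the CLIP space, and a loss that pulls the restored image toward the ground truth while pushing it away from low- and medium-quality samples — can be illustrated with a minimal PyTorch sketch. The function and tensor names below (three_tier_quality_scores, quality_aware_loss, feat_*) are hypothetical, embeddings are assumed to be precomputed by a frozen vision-language encoder, and the paper's actual fine-grained quality perceiver and difficulty-adaptive perceptual loss may differ in detail.

```python
import torch
import torch.nn.functional as F

def three_tier_quality_scores(image_emb: torch.Tensor,
                              prompt_embs: torch.Tensor,
                              tau: float = 0.07) -> torch.Tensor:
    """Soft assignment of an image to {high, medium, low} quality tiers.

    image_emb:   (B, D) image embeddings from a frozen vision-language encoder.
    prompt_embs: (3, D) embeddings of three tier prompts, e.g.
                 "a high/medium/low quality photo" (hypothetical wording).
    Returns (B, 3) probabilities from temperature-scaled cosine similarities.
    """
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(prompt_embs, dim=-1)
    logits = img @ txt.t() / tau                          # (B, 3)
    return logits.softmax(dim=-1)

def quality_aware_loss(feat_restored: torch.Tensor,
                       feat_gt: torch.Tensor,
                       feat_low: torch.Tensor,
                       feat_medium: torch.Tensor,
                       tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style objective: pull the restored feature toward the ground
    truth and push it away from low- and medium-quality reference features."""
    q   = F.normalize(feat_restored, dim=-1)                            # (B, D)
    pos = F.normalize(feat_gt, dim=-1)                                  # (B, D)
    neg = F.normalize(torch.stack([feat_low, feat_medium], 1), dim=-1)  # (B, 2, D)

    pos_sim = (q * pos).sum(-1, keepdim=True) / tau                     # (B, 1)
    neg_sim = torch.einsum('bd,bkd->bk', q, neg) / tau                  # (B, 2)

    logits = torch.cat([pos_sim, neg_sim], dim=1)                       # (B, 3)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive = class 0
    return F.cross_entropy(logits, labels)
```

The "difficulty-adaptive" aspect could plausibly be approximated by weighting each sample's loss with its predicted low/medium-quality probability from three_tier_quality_scores, but that weighting scheme is an assumption here, not the paper's exact formulation.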