Joint (building)
Denoising
Artificial intelligence
Computer science
Computer vision
Distillation
Pattern recognition (psychology)
Engineering
Chemistry
Chromatography
Construction engineering
Authors
Jingyun Liu, Han Zhu, Zhenzhong Chen, Shan Liu
Identifier
DOI:10.1109/pcs60826.2024.10566401
Abstract
Joint demosaicking and denoising (JDD) serves as an initial step of image signal processing (ISP), and its performance significantly influences subsequent operations such as image processing and compression. Although deep learning-based methods have demonstrated remarkable performance in JDD, they suffer from heavy computational cost and memory occupation, hindering their deployment on resource-constrained devices. To reach a compromise between performance and complexity, we propose a novel knowledge distillation method named Mutual Guidance Distillation (MGD). It aims to improve the accuracy of lightweight JDD networks (student) by imitating the mutual guidance procedure between color components from a cumbersome network (teacher). The procedure is achieved by computing the spatial correlation between representations of the red, blue, and green components from different layers. The higher the correlation, the greater the influence of each component representation on the restoration of the other components. The single-branch student network is then trained to mimic the correlation of a multi-branch teacher network. Since the multi-branch teacher derives advantages from component mutual guidance and achieves outstanding performance, student networks can be enhanced under the instruction of the teacher. Experimental results on various joint demosaicking and denoising datasets demonstrate that MGD outperforms several state-of-the-art distillation methods both quantitatively and qualitatively, and visual results illustrate that MGD effectively mitigates color artifacts, even on hard cases from the MIT moiré dataset.
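The abstract describes matching inter-component spatial correlations between a teacher and a student network. A minimal NumPy sketch of that idea is shown below; the paper itself does not specify these function names, shapes, or the exact correlation measure, so `component_correlation` and `mgd_style_loss` are hypothetical illustrations (here using cosine similarity between flattened per-component feature maps and a mean-squared correlation-matching loss):

```python
import numpy as np

def component_correlation(feats):
    """Pairwise spatial correlation between color-component representations.

    feats: dict mapping a component name (e.g. 'r', 'g', 'b') to a 2-D
    feature map. Shapes and the cosine-similarity measure are assumptions,
    not the paper's exact formulation.
    """
    names = sorted(feats)
    # Flatten each component representation into a vector and L2-normalize.
    vecs = np.stack([feats[n].ravel() for n in names]).astype(float)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    # Cosine similarity between every pair of components.
    return vecs @ vecs.T

def mgd_style_loss(teacher_feats, student_feats):
    """Penalize the student for deviating from the teacher's
    inter-component correlation structure (illustrative loss)."""
    ct = component_correlation(teacher_feats)
    cs = component_correlation(student_feats)
    return float(np.mean((ct - cs) ** 2))
```

In a training loop this loss would be added (with some weight) to the student's usual restoration loss, so the lightweight single-branch student learns the teacher's component mutual-guidance structure without replicating its multi-branch architecture.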