Keywords: color constancy, artificial intelligence, Transformer, computer science, computer vision, pattern recognition (psychology), image (mathematics), engineering, voltage, electrical engineering
Authors
Kui Jiang,Qiong Wang,Zhaoyi An,Zheng Wang,Cong Zhang,Chia‐Wen Lin
Identifier
DOI:10.1109/tetci.2024.3369321
Abstract
Images captured in low-light or underwater environments often suffer significant degradation, which can harm both visual quality and the performance of downstream tasks. While convolutional neural networks (CNNs) and Transformer architectures have made significant progress in computer vision, few efforts harmonize them into a single concise framework for enhancing such images. To this end, this study proposes to aggregate the complementary strengths of self-attention (SA) and CNNs for accurate perturbation removal while preserving background content. Building on this, we put forward a Retinex-based framework, dubbed Mutual Retinex, in which a two-branch structure characterizes the specific knowledge of the reflectance and illumination components while removing perturbations. To maximize its potential, Mutual Retinex is equipped with a new mutual learning mechanism built around an elaborately designed mutual representation module (MRM). In the MRM, the complementary information between the reflectance and illumination components is encoded and used to refine each other. Through this complementary learning via the mutual representation, the enhanced results generated by our model exhibit superior color consistency and naturalness. Extensive experiments show the significant superiority of our mutual-learning-based method over thirteen competitors on low-light enhancement and ten methods on underwater image enhancement. In particular, the proposed Mutual Retinex surpasses the state-of-the-art method MIRNet-v2 by 0.90 dB and 2.46 dB in PSNR on the LOL 1000 and FIVEK datasets, respectively, while using only 19.8% of its model parameters.
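The framework above builds on the Retinex model, which treats an observed image as the pixel-wise product of a reflectance component and an illumination component, I = R ⊙ L. The paper does not publish its network code here, so the following is only a minimal sketch of the classic single-scale Retinex decomposition that the method generalizes: illumination is approximated by a Gaussian-smoothed copy of the image, and reflectance is recovered in the log domain. The function names (`estimate_illumination`, `retinex_decompose`) and the choice of a separable Gaussian are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D normalized Gaussian kernel; radius defaults to 3 standard deviations.
    radius = radius if radius is not None else max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def estimate_illumination(image, sigma=1.0):
    # Separable Gaussian smoothing: the slowly varying blurred image serves
    # as a crude illumination estimate L (classic single-scale Retinex).
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def retinex_decompose(image, sigma=1.0, eps=1e-6):
    # I = R * L  =>  log R = log I - log L (eps avoids log of zero).
    L = estimate_illumination(image, sigma)
    log_R = np.log(image + eps) - np.log(L + eps)
    return log_R, L

# Usage: decompose a grayscale image, then recombine to recover it.
img = np.random.default_rng(0).uniform(0.1, 1.0, (16, 16))
log_R, L = retinex_decompose(img)
recon = np.exp(log_R) * L  # reconstruction: R * L ~= I up to eps
```

In the paper's two-branch design, each of these components is instead predicted by a learned branch, and the MRM exchanges information between the two estimates rather than fixing L to a Gaussian blur as done here.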