Computer science
Consistency (knowledge base)
Process (computing)
Image (mathematics)
Filter (signal processing)
Artificial intelligence
Set (abstract data type)
Image quality
Computer vision
Deep learning
Noise reduction
Task (project management)
Reduction (mathematics)
Economy
Operating system
Management
Programming language
Mathematics
Geometry
Authors
Yuezhou Li, Yuzhen Niu, Rui Xu, Yuzhong Chen
Identifier
DOI:10.1016/j.engappai.2023.106611
Abstract
The current drive toward efficient intelligent visual systems faces challenges in the task of low-light image enhancement. To improve image perception, low-light scenes under different illumination conditions must be properly handled. However, typical CNN-based methods use the same set of parameters for all images, which limits their capability to handle complex scenes. Meanwhile, existing deep models integrate low-level and high-level features by simple addition or concatenation, lacking designs tailored to the low-light image enhancement task. To address these challenges, we propose a zero-referenced adaptive filter network (ZAFN) for low-light image enhancement. Specifically, the adaptive filters are generated by integrating high-level contents from multiple partial scenes. The iterative enlightening process is then conducted on the low-level features, which are dynamically modulated by the adaptive filters. To alleviate the requirement of paired training data and enable zero-referenced learning, we propose a color enhancement loss, a global consistency loss, and a self-regularized denoising loss for high-quality results. Our ZAFN model, which has a small model size and low computational cost, outperforms other state-of-the-art zero-referenced methods on four popular datasets.
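The "iterative enlightening process" mentioned in the abstract can be illustrated with the quadratic enhancement curve popularized by earlier zero-reference methods (e.g. Zero-DCE). The sketch below is NOT the authors' ZAFN formulation — ZAFN modulates low-level features with adaptively generated filters — but it shows the general idea of iteratively brightening an image without a paired reference, assuming pixel values normalized to [0, 1] and a hypothetical per-iteration curve parameter `a`:

```python
import numpy as np

def iterative_enhance(img, alphas):
    """Iteratively apply the quadratic curve LE(x) = x + a * x * (1 - x).

    For a in [-1, 1] and x in [0, 1], each step keeps values in [0, 1];
    a > 0 brightens dark pixels more strongly than bright ones.
    """
    out = np.asarray(img, dtype=np.float64)
    for a in alphas:
        out = out + a * out * (1.0 - out)
    return out

# Toy "low-light image": normalized intensities in [0, 1].
img = np.array([[0.05, 0.2],
                [0.4, 0.8]])

# Four iterations with a fixed (hypothetical) curve parameter a = 0.8.
bright = iterative_enhance(img, alphas=[0.8] * 4)
```

In a zero-referenced setting, the curve parameters would be predicted by a network and trained with unsupervised losses (such as the color, consistency, and denoising losses the paper proposes) rather than fixed as above.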