Computer science
Artificial intelligence
Pooling
Feature (linguistics)
Pyramid (geometry)
Image restoration
Feature extraction
Coding (set theory)
Encoder
Brightness
Pattern recognition (psychology)
Deep learning
Computer vision
Image (mathematics)
Image processing
Mathematics
Optics
Physics
Philosophy
Linguistics
Geometry
Set (abstract data type)
Programming language
Operating system
Authors
Hai Jiang, Ren Yang, Songchen Han
Identifier
DOI: 10.1016/j.cviu.2024.103952
Abstract
Previous coarse-to-fine strategies typically spend equal effort on feature extraction and feature reconstruction, and gradually improve image brightness from bottom to top, so that computational resources are not effectively devoted to restoration. In this paper, we propose a new deep framework for Robust and Fast Low-Light Image Enhancement, dubbed RFLLIE. Specifically, we first use a lightweight CNN encoder consisting of a few convolutional and pooling layers to form a feature pyramid for restoration. Then, a coarse-to-fine recovery module, consisting of cascaded depth blocks with well-designed spatial attention layers and progressive dilation Resblocks, is proposed for feature aggregation and global-to-local restoration. As such, RFLLIE forms a light-head, heavy-tail architecture that focuses more on feature reconstruction than on feature extraction. Additionally, we propose a decomposition-guided restoration loss based on Retinex theory that adopts an “enhancement before decomposition” strategy instead of the commonly used “decomposition before enhancement” to further improve contrast and suppress noise. Extensive experiments demonstrate that our method outperforms existing state-of-the-art methods both quantitatively and visually, and achieves a better trade-off between performance and efficiency. Our code will be available at https://github.com/JianghaiSCU/RFLLIE.
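The abstract describes the overall layout but not the implementation. Below is a minimal PyTorch sketch of how such a light-head, heavy-tail design could be wired: a small encoder builds a three-level feature pyramid, and most of the capacity sits in a coarse-to-fine decoder built from dilated residual blocks that restores from the coarsest (global) level up to full resolution (local). All module names, channel widths, block counts, and the direction in which dilation grows are illustrative assumptions, not the authors' released code; the spatial attention layers and the Retinex-based decomposition-guided loss are omitted for brevity.

```python
# Hypothetical sketch of a light-head / heavy-tail enhancement network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightEncoder(nn.Module):
    """Light head: a few conv + pooling layers producing a 3-level feature pyramid."""
    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        f1 = self.conv1(x)               # full resolution
        f2 = self.conv2(self.pool(f1))   # 1/2 resolution
        f3 = self.conv3(self.pool(f2))   # 1/4 resolution
        return f1, f2, f3

class DilatedResBlock(nn.Module):
    """Residual block; dilation (assumed) grows at coarser pyramid levels."""
    def __init__(self, ch, dilation):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        return x + self.body(x)

class CoarseToFineDecoder(nn.Module):
    """Heavy tail: restore at the coarsest level first, then refine upward."""
    def __init__(self, ch=32, blocks=4):
        super().__init__()
        self.stage3 = nn.Sequential(*[DilatedResBlock(ch, 4) for _ in range(blocks)])
        self.stage2 = nn.Sequential(*[DilatedResBlock(ch, 2) for _ in range(blocks)])
        self.stage1 = nn.Sequential(*[DilatedResBlock(ch, 1) for _ in range(blocks)])
        self.fuse2 = nn.Conv2d(2 * ch, ch, 1)   # aggregate upsampled coarse features with skip features
        self.fuse1 = nn.Conv2d(2 * ch, ch, 1)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, f1, f2, f3):
        d3 = self.stage3(f3)                               # global / coarse restoration
        u3 = F.interpolate(d3, scale_factor=2, mode="bilinear", align_corners=False)
        d2 = self.stage2(self.fuse2(torch.cat([u3, f2], dim=1)))
        u2 = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.stage1(self.fuse1(torch.cat([u2, f1], dim=1)))
        return torch.sigmoid(self.out(d1))                 # enhanced image in [0, 1]

if __name__ == "__main__":
    low = torch.rand(1, 3, 256, 256)          # dummy low-light input
    enc, dec = LightEncoder(), CoarseToFineDecoder()
    enhanced = dec(*enc(low))
    print(enhanced.shape)                     # torch.Size([1, 3, 256, 256])
```

In this sketch the parameter count is deliberately concentrated in the decoder stages, mirroring the paper's claim that effort should go to feature reconstruction rather than extraction; the encoder contributes only three convolutions.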