Topics: Training (meteorology), Diffusion, Image (mathematics), Computer science, Artificial intelligence, Computer vision, Pattern recognition (psychology), Geography, Physics, Thermodynamics, Meteorology
Authors
Yunlong Lin, Ye Tian, Sixiang Chen, Zhenqi Fu, Yingying Wang, Wenhao Chai, Zhaohu Xing, Wenxue Li, Lei Zhu, Xinghao Ding
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2025-04-11
Volume/Issue: 39 (5): 5307-5315
Cited by: 6
Identifier
DOI: 10.1609/aaai.v39i5.32564
Abstract
Existing low-light image enhancement (LIE) methods have achieved noteworthy success in handling synthetic distortions, yet they often fall short in practical applications. The limitations arise from two inherent challenges in real-world LIE: 1) collecting distorted/clean image pairs is often impractical and sometimes impossible, and 2) accurately modeling complex degradations is a non-trivial problem. To overcome these challenges, we propose the Attribute Guidance Diffusion framework (AGLLDiff), a training-free method for effective real-world LIE. Instead of explicitly defining the degradation process, AGLLDiff shifts the paradigm and models the desired attributes of normal-light images, such as exposure, structure, and color. These attributes are readily available and impose no assumptions about the degradation process, and they guide the diffusion sampling process toward a reliable, high-quality solution space. Extensive experiments demonstrate that our approach outperforms the current leading unsupervised LIE methods across benchmarks in terms of both distortion-based and perceptual metrics, and it performs well even under sophisticated wild degradations.
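To make the abstract's mechanism concrete, below is a minimal, hypothetical sketch of attribute-guided diffusion sampling in the spirit described above; it is not the authors' released implementation. At each reverse step, the predicted clean image is scored against simple normal-light attribute priors (an exposure target and a gray-world color-balance term), and the gradient of those losses nudges the sample. The noise schedule, the placeholder denoiser, the exact loss forms, and the guidance_scale value are all illustrative assumptions.

```python
# Sketch only: attribute-guided reverse diffusion (assumed details, not the paper's code).
import torch
import torch.nn.functional as F

T = 50                                   # number of reverse steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)    # simple linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x_t, t):
    """Placeholder epsilon-predictor; a real pipeline would call a pretrained
    diffusion U-Net here."""
    return torch.zeros_like(x_t)

def exposure_loss(x0, target_mean=0.5):
    # Push the mean intensity of local patches toward a normal-light level.
    patches = F.avg_pool2d(x0, kernel_size=16)
    return ((patches - target_mean) ** 2).mean()

def color_loss(x0):
    # Gray-world style prior: RGB channel means should roughly agree.
    means = x0.mean(dim=(2, 3))          # shape (B, 3)
    return ((means - means.mean(dim=1, keepdim=True)) ** 2).mean()

def guided_sample(low_light, guidance_scale=100.0):
    x_t = torch.randn_like(low_light)
    for t in reversed(range(T)):
        x_t = x_t.detach().requires_grad_(True)
        eps = denoiser(x_t, t)
        a_bar = alpha_bars[t]
        # Predicted clean image from the current noisy sample.
        x0_hat = (x_t - torch.sqrt(1 - a_bar) * eps) / torch.sqrt(a_bar)
        # Attribute guidance: gradient of the attribute losses w.r.t. x_t.
        loss = exposure_loss(x0_hat) + color_loss(x0_hat)
        grad = torch.autograd.grad(loss, x_t)[0]
        # Standard DDPM mean update, then a guidance nudge down the loss gradient.
        mean = (x_t - betas[t] / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise - guidance_scale * grad
    return x_t.clamp(0, 1)

# Toy usage: "enhance" a random dark image tensor of shape (1, 3, 64, 64).
result = guided_sample(torch.rand(1, 3, 64, 64) * 0.2)
```

Note that the guidance term plays the role of the attribute constraints described in the abstract: no paired data or degradation model is needed, only losses that characterize what a normal-light image should look like.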