PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement

Keywords: Computer science · Artificial intelligence · Pixel · Pattern recognition (psychology) · Computer vision
Authors
Jiachen Dang, Yong Zhong, Xiaolin Qin
Source
Journal: Computer Vision and Image Understanding [Elsevier BV]
Volume: 241, Article 103930. Cited by: 12
Identifier
DOI: 10.1016/j.cviu.2024.103930
Abstract

Recently, transformer-based methods have shown strong competitiveness against CNN-based methods on the low-light image enhancement task by employing self-attention for feature extraction. Transformer-based methods perform well in modeling long-range pixel dependencies, which are essential for low-light image enhancement to achieve better lighting, natural colors, and higher contrast. However, the high computational cost of self-attention limits its adoption in low-light image enhancement, and some works struggle to balance accuracy and computational cost. In this work, we propose PPformer, a lightweight and effective network for low-light image enhancement based on the proposed pixel-wise and patch-wise cross-attention mechanism. PPformer is a CNN-transformer hybrid network divided into three parts: a local branch, a global branch, and Dual Cross-Attention, each of which plays a vital role. Specifically, the local branch extracts local structural information using a stack of Wide Enhancement Modules, while the global branch provides refined global information through a Cross Patch Module and a Global Convolution Module. Moreover, unlike self-attention, we use the extracted global semantic information to guide the modeling of dependencies between local and non-local regions. By computing Dual Cross-Attention, PPformer can effectively restore images with better color consistency, natural brightness, and contrast. Benefiting from the proposed dual cross-attention mechanism, PPformer effectively captures dependencies at both the pixel and patch levels over the full-size feature map. Extensive experiments on eleven real-world benchmark datasets show that PPformer achieves better quantitative and qualitative results than previous state-of-the-art methods.
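The core idea the abstract describes — cross-attention, where the queries come from one branch (e.g. pixel-level features) and the keys/values come from the other (e.g. patch-level global features), so that global semantic information guides the modeling of local/non-local dependencies — can be sketched in a few lines. This is an illustrative NumPy sketch of generic scaled-dot-product cross-attention, not the authors' implementation; the `cross_attention` helper, the token counts, and the feature dimension are assumptions chosen for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Generic cross-attention: queries from one branch attend to
    keys/values from the other, so one stream guides the other.
    Shapes: queries (B, Nq, d), keys/values (B, Nk, d) -> (B, Nq, d)."""
    d = queries.shape[-1]
    scores = queries @ keys.transpose(0, 2, 1) / np.sqrt(d)  # (B, Nq, Nk)
    return softmax(scores, axis=-1) @ values                 # (B, Nq, d)

# Toy shapes: pixel-level tokens query patch-level (global) tokens.
rng = np.random.default_rng(0)
pixel_tokens = rng.standard_normal((1, 64, 16))  # e.g. an 8x8 map, dim 16
patch_tokens = rng.standard_normal((1, 4, 16))   # e.g. 2x2 patches, dim 16

out = cross_attention(pixel_tokens, patch_tokens, patch_tokens)
print(out.shape)  # (1, 64, 16): one refined feature per pixel token
```

Swapping the roles of the two token sets gives the complementary patch-wise direction; self-attention is the special case where queries, keys, and values all come from the same token set, which is what makes its cost quadratic in the number of pixels.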