
PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement

Keywords: Computer Science; Artificial Intelligence; Pixel; Pattern Recognition (Psychology); Computer Vision
Authors
Jiachen Dang, Yong Zhong, Xiaolin Qin
Source
Journal: Computer Vision and Image Understanding [Elsevier BV]
Volume 241, Article 103930. Cited by: 17
Identifier
DOI: 10.1016/j.cviu.2024.103930
Abstract

Recently, transformer-based methods have become strongly competitive with CNN-based methods on the low-light image enhancement task by employing self-attention for feature extraction. Transformer-based methods perform well in modeling long-range pixel dependencies, which are essential for low-light image enhancement to achieve better lighting, natural colors, and higher contrast. However, the high computational cost of self-attention limits its adoption in low-light image enhancement, and existing works struggle to balance accuracy against computational cost. In this work, we propose PPformer, a lightweight and effective network for low-light image enhancement based on a pixel-wise and patch-wise cross-attention mechanism. PPformer is a CNN-transformer hybrid network divided into three parts: a local branch, a global branch, and Dual Cross-Attention, each of which plays a vital role. Specifically, the local branch extracts local structural information with a stack of Wide Enhancement Modules, while the global branch provides refined global information through a Cross Patch Module and a Global Convolution Module. Unlike self-attention, we use the extracted global semantic information to guide the modeling of dependencies between local and non-local regions. By computing Dual Cross-Attention, PPformer can effectively restore images with better color consistency, natural brightness, and contrast. Benefiting from the proposed dual cross-attention mechanism, PPformer effectively captures dependencies at both the pixel and patch levels over the full-size feature map. Extensive experiments on eleven real-world benchmark datasets show that PPformer achieves better quantitative and qualitative results than previous state-of-the-art methods.
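To make the cost argument concrete, below is a minimal PyTorch sketch of cross-attention in which queries come from one feature stream and keys/values from another, which is the general pattern the abstract describes. This is not the authors' implementation: the module name CrossAttention, the tensor shapes, and the single-head design are illustrative assumptions. Because the many local tokens attend to a small set of global tokens rather than to each other, the attention map scales linearly in the number of pixels instead of quadratically, which is the saving over full self-attention that the paper targets.

```python
# A minimal sketch (not the paper's code) of cross-attention where
# queries come from a local branch and keys/values from a global branch.
# Names (CrossAttention, local_feat, global_feat) are illustrative.
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Single-head cross-attention: local tokens attend to global tokens."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim)  # queries from the local branch
        self.to_k = nn.Linear(dim, dim)  # keys from the global branch
        self.to_v = nn.Linear(dim, dim)  # values from the global branch

    def forward(self, local_feat: torch.Tensor,
                global_feat: torch.Tensor) -> torch.Tensor:
        # local_feat:  (B, N_local, dim),  e.g. flattened pixel tokens
        # global_feat: (B, N_global, dim), e.g. a few patch-level tokens
        q = self.to_q(local_feat)
        k = self.to_k(global_feat)
        v = self.to_v(global_feat)
        # Attention map is (B, N_local, N_global): linear in N_local,
        # unlike self-attention's (B, N_local, N_local).
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        return attn @ v  # (B, N_local, dim)


# Usage: 4096 pixel tokens attending to 16 patch tokens of width 32.
x_local = torch.randn(1, 64 * 64, 32)
x_global = torch.randn(1, 16, 32)
out = CrossAttention(32)(x_local, x_global)
print(out.shape)  # torch.Size([1, 4096, 32])
```

In this sketch the attention map has 4096 × 16 entries rather than the 4096 × 4096 a pixel-level self-attention would need, which illustrates why guiding pixel-wise attention with a small set of patch-wise global tokens keeps the network lightweight.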