
Exploring Prototype-Anchor Contrast for Semantic Segmentation

Keywords: Computer science; Segmentation; Contrast (vision); Artificial intelligence; Image segmentation; Computer vision; Natural language processing
Authors
Qinghua Ren, Shijian Lu, Qirong Mao, Ming Dong
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology [Institute of Electrical and Electronics Engineers]
Volume/Issue: 34 (8): 7106-7120; Cited by: 1
Identifier
DOI: 10.1109/tcsvt.2024.3370570
Abstract

Pixel-wise contrastive learning offers a new training paradigm for semantic segmentation by directly shaping the pixel embedding space. Compared with pixel-to-pixel contrast, which often requires large memory and high computational cost, pixel-to-prototype contrast exploits the semantic correlations among pixels more efficiently by pulling positive pixel-prototype pairs close and pushing negative pairs apart. However, most existing work treats pixels as anchors when forming the contrast, either failing to capture intra-class variance or introducing extra computational overhead. In this work, we propose Prototype-Anchor Contrast (ProAC), a novel prototypical contrastive learning paradigm that strengthens pixel-prototype associations in a simple yet effective fashion. First, ProAC pre-defines class prototypes (serving as cluster centroids) by exploiting uniformity on the hypersphere in the feature space, so no prototype updating is needed during network optimization, which greatly simplifies training. Second, by treating prototypes as anchors, ProAC builds a novel prototype-to-pixel learning path in which a large number of negative pixels can be generated naturally to describe rich semantic information, without relying on auxiliary sample-augmentation techniques. Finally, as a plug-and-play regularization term, ProAC can be attached to most existing segmentation models and assists network optimization by directly shaping the pixel embedding space. Extensive experiments on different benchmarks show that ProAC brings mIoU gains of 1.4% to 2.0% for fully supervised models and 0.9% to 6.0% for domain-adaptive models. It also yields mIoU gains of 1.8% to 2.7% in more challenging settings, including different resolutions, diverse illuminations, and masked scenarios.
