Authors
Xianzhu Pan, Xuemei Xie, Jianxiu Yang
Source
Journal: Neurocomputing
[Elsevier BV]
Date: 2024-10-28
Volume/Pages: 614: 128793-128793
Citations: 7
Identifier
DOI: 10.1016/j.neucom.2024.128793
Abstract
Referring image segmentation aims to segment the target object specified by a given language expression. Recently, bottom-up fusion networks have utilized language features to highlight the most relevant regions during the visual encoding stage. However, establishing only the relationship between individual pixels and words is not comprehensive. To alleviate this problem, we propose a mixed-scale cross-modal fusion method that widens the interaction between vision and language. Specifically, at each stage, pyramid pooling is used to augment visual perception and improve the interaction between visual and linguistic features, thereby highlighting the regions relevant to the expression. Additionally, we employ a simple multi-scale feature fusion module to effectively combine the multi-scale aligned features. Experiments on standard RIS benchmarks demonstrate that the proposed method achieves favorable performance against state-of-the-art approaches. Moreover, experiments with different visual backbones show that the proposed method yields consistently and significantly improved results.
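The abstract gives no implementation details, so the following is only an illustrative sketch of the general idea it describes: pooling a visual feature map at several pyramid scales so that language features can interact with regions (not just individual pixels), then using the language-conditioned scores to highlight relevant visual content. All function names, the bin sizes, and the softmax-weighting scheme are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Average-pool a (C, H, W) feature map over several grid sizes,
    returning one vector per pooled region (mixed scales).
    Hypothetical helper; bin sizes are illustrative."""
    c, h, w = feat.shape
    regions = []
    for b in bins:
        hs, ws = h // b, w // b
        for i in range(b):
            for j in range(b):
                patch = feat[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                regions.append(patch.mean(axis=(1, 2)))
    return np.stack(regions)  # shape (num_regions, C)

def cross_modal_highlight(feat, lang, bins=(1, 2, 4)):
    """Score each pyramid region against a language feature vector and
    reweight the visual features with the language-guided context --
    a sketch of region-word interaction beyond pixel-word matching."""
    regions = pyramid_pool(feat, bins)            # (R, C)
    scores = regions @ lang                       # (R,) region-language similarity
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over regions
    context = weights @ regions                   # (C,) language-guided context
    return feat * (1.0 + context[:, None, None])  # highlight relevant channels
```

In a real network the pooling, scoring, and fusion would be learned (e.g. with projection layers and attention) and applied at every encoder stage; this sketch only shows the data flow.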