Keywords: Computer Science, Transformer, Object Detection, Artificial Intelligence, Pattern Recognition
Authors
Ruohao Guo, Xianghua Ying, Yanyu Qi, Liao Qu
Identifier
DOI: 10.1109/tmm.2024.3369922
Abstract
Recent years have witnessed growing interest in co-object segmentation and multi-modal salient object detection. Many efforts are devoted to segmenting co-existing objects among a group of images or to detecting salient objects from different modalities. Despite the appreciable performance achieved on their respective benchmarks, each of these methods is limited to a specific task and cannot be generalized to the others. In this paper, we develop a Unified TRansformer-based framework, namely UniTR, aiming to tackle each of the above tasks with a single unified architecture. Specifically, a transformer module (CoFormer) is introduced to learn the consistency of relevant objects across images or the complementarity across different modalities. To generate high-quality segmentation maps, we adopt a dual-stream decoding paradigm that allows the extracted consistent or complementary information to better guide mask prediction. Moreover, a feature fusion module (ZoomFormer) is designed to enhance backbone features and capture multi-granularity and multi-semantic information. Extensive experiments show that UniTR performs well on 17 benchmarks and surpasses existing state-of-the-art approaches.
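The abstract does not give implementation details for CoFormer's cross-modal interaction. As a rough, hedged illustration only, the general kind of mechanism described (letting tokens from one modality attend to tokens from another to extract complementary information) can be sketched with scaled dot-product cross-attention; all function names, shapes, and the RGB/depth pairing below are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    """Each query token aggregates context tokens weighted by similarity.

    query_feats:   (N_q, d) tokens from one modality (e.g. RGB)
    context_feats: (N_c, d) tokens from another modality (e.g. depth)
    """
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (N_q, N_c)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ context_feats                       # (N_q, d)

# Toy example: 16 flattened feature tokens of dimension 64 per modality.
rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 64))
depth = rng.standard_normal((16, 64))
fused = cross_attention(rgb, depth)
print(fused.shape)  # (16, 64)
```

In a real multi-modal detector this block would typically be applied symmetrically (RGB attending to depth and vice versa) and stacked with feed-forward layers; the sketch only shows the core attention step.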