Authors
Yanliang Ge, Qiao Zhang, Tian-Zhu Xiang, Cong Zhang, Jing Zhang, Hongbo Bi
Identifier
DOI:10.1016/j.cviu.2022.103611
Abstract
The critical challenge for co-salient object detection (CoSOD) is to extract common saliency information from a group of relevant images. Most existing CoSOD methods do not fully explore the semantic commonality of co-salient objects, which can provide strong guidance for collaborative feature learning, and do not take full advantage of the rich hierarchical features of different layers, resulting in inferior performance. To this end, we propose a Group Semantic-guided Neighbor interaction network (GSNNet) for co-salient object detection. Specifically, the proposed network contains a group semantic module (GSM), a neighbor interaction module (NIM), and a feature enhancement module (FEM). The network first learns a semantic consensus from a group of relevant images via the GSM, which uses a reverse guidance strategy and a group-wise combination strategy to distill group semantic cues from the forward and complementary features. Under the guidance of the group semantics, the NIM conducts neighbor feature interaction between adjacent layers to excavate contextual information and enhance feature representation. The FEM then refines the critical cues with an attention mechanism, which enhances the compactness of the feature representation. The proposed GSNNet is evaluated on three challenging CoSOD benchmark datasets using four widely used metrics, demonstrating that our method is superior to twelve other cutting-edge methods for co-salient object detection.
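The core idea the abstract describes, distilling a group-level semantic consensus and using it to guide each image's features, can be illustrated schematically. The sketch below is an illustrative assumption and not the authors' GSM implementation: it average-pools per-image feature maps into a group descriptor and gates each image's channels by their agreement with that consensus.

```python
import numpy as np

def group_consensus_guidance(feats):
    """Toy sketch of group-semantic guidance (not the paper's GSM).

    feats: array of shape (N, C, H, W) -- deep features for N related images.
    Returns guided features of the same shape.
    """
    # Per-image global descriptor: average-pool over spatial dims -> (N, C)
    desc = feats.mean(axis=(2, 3))
    # Group semantic consensus: mean descriptor across the image group -> (C,)
    consensus = desc.mean(axis=0)
    consensus = consensus / (np.linalg.norm(consensus) + 1e-8)
    # Channel-wise agreement of each image with the consensus -> (N, C)
    gate = desc * consensus
    # Squash to (0, 1) so agreeing channels are emphasized, others suppressed
    gate = 1.0 / (1.0 + np.exp(-gate))
    # Broadcast the gate over spatial dims to modulate each image's features
    return feats * gate[:, :, None, None]

feats = np.random.default_rng(0).standard_normal((4, 8, 16, 16))
out = group_consensus_guidance(feats)
print(out.shape)  # (4, 8, 16, 16)
```

In the actual network this guidance would be learned end-to-end and combined with the reverse-guidance and neighbor-interaction steps; the sketch only conveys how a group-wise statistic can steer per-image features.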