Authors
Xiao Liang,Erkun Yang,Cheng Deng,Yanhua Yang
Abstract
Transformers have been recognized as powerful tools for various cross-modal tasks due to their superior ability to perform representation learning through self-attention. Existing transformer-based cross-modal models can be categorized as single-stream or dual-stream. By performing fine-grained interaction with self-attention on the concatenated cross-modal features, single-stream models can learn intra- and inter-modal correlations simultaneously. However, this simple concatenation treats the inputs of different modalities equally; as a result, the heterogeneous differences between modalities are ignored, leading to a modality gap. Dual-stream models process the inputs of different modalities separately and then perform cross-modal interaction in subsequent fusion networks, so they fail to integrate fine-grained intra- and inter-modal correlations in a uniform module. To this end, we propose an effective heterogeneous graph transformer for dual-stream cross-modal representation learning, named CrossFormer, which constructs a heterogeneous graph as a bridge to achieve fine-grained intra- and inter-modal interaction on a dual-stream network. Specifically, we first represent multi-modal data with a heterogeneous graph, then develop a dual-positional encoding strategy that provides the heterogeneous graph with relative positional information. Finally, dual-stream self-attention is performed on the heterogeneous graph, bridging the gap between modalities and effectively capturing fine-grained intra- and inter-modal interactions simultaneously. Extensive experiments on various cross-modal tasks demonstrate the superiority of our method.
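To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how intra- and inter-modal interactions can be separated on one token set: the heterogeneous graph is reduced to two boolean edge masks over the concatenated text and image tokens, and the dual stream is modeled as two masked self-attention passes whose outputs are summed. All function names, the mask construction, and the additive fusion are illustrative assumptions; the paper's dual-positional encoding is omitted here.

```python
import math

def softmax(xs):
    # Numerically stable softmax; masked entries arrive as -inf and get weight 0.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def masked_attention(queries, keys, values, mask):
    """Scaled dot-product attention restricted to the edges allowed by `mask`.
    mask[i][j] is True when token i may attend to token j."""
    d = len(queries[0])
    out = []
    for i, q in enumerate(queries):
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                  if mask[i][j] else float("-inf")
                  for j, k in enumerate(keys)]
        w = softmax(scores)
        out.append([sum(w[j] * values[j][t] for j in range(len(values)))
                    for t in range(len(values[0]))])
    return out

def modality_masks(n_text, n_image):
    """Edge masks for a heterogeneous graph whose first n_text nodes are text
    tokens and whose last n_image nodes are image patches (an assumed layout)."""
    n = n_text + n_image
    same = lambda i, j: (i < n_text) == (j < n_text)
    intra = [[same(i, j) for j in range(n)] for i in range(n)]
    inter = [[not same(i, j) for j in range(n)] for i in range(n)]
    return intra, inter

def dual_stream_layer(tokens, n_text):
    """One illustrative dual-stream step: intra-modal attention and inter-modal
    attention run on the same tokens, then are fused by elementwise addition."""
    intra, inter = modality_masks(n_text, len(tokens) - n_text)
    a = masked_attention(tokens, tokens, tokens, intra)
    b = masked_attention(tokens, tokens, tokens, inter)
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

With one text token and one image token, the intra stream reduces to self-attention within each modality while the inter stream exchanges information across modalities; a real implementation would use learned projections, multiple heads, and the positional encoding described above, but the masking pattern is the part the heterogeneous graph determines.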