Topics: Computer Science; Graphics; Artificial Intelligence; Meta-learning (Computer Science); Machine Learning
Authors
Feng Zhao, Donglin Wang
Source
Venue: Conference on Information and Knowledge Management (CIKM)
Date: 2021-10-26
Pages: 3657-3661
Identifier
DOI: 10.1145/3459637.3482151
Abstract
In recent years, graph contrastive learning has achieved promising node classification accuracy using graph neural networks (GNNs), which can learn representations in an unsupervised manner. However, despite performing well on seen classes, such representations cannot generalize to unseen novel classes with only few-shot labeled samples. To endow graph contrastive learning with generalization capability, we propose multimodal graph meta contrastive learning (MGMC), which integrates multimodal meta learning into graph contrastive learning. On the one hand, MGMC achieves effective fast adaptation to unseen novel classes by means of bilevel meta optimization, solving few-shot problems. On the other hand, MGMC generalizes quickly to a generic dataset with a multimodal distribution by introducing a FiLM-based modulation module. In addition, MGMC incorporates the latest graph contrastive learning method, which does not rely on the construction of augmentations and negative examples. To the best of our knowledge, this is the first work to investigate graph contrastive learning for few-shot problems. Extensive experimental results on three graph-structured datasets demonstrate the effectiveness of the proposed MGMC on few-shot node classification tasks.
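The abstract's FiLM-based modulation module refers to feature-wise linear modulation: each feature dimension of a representation is scaled and shifted by task-conditioned parameters. The sketch below illustrates only this generic mechanism in plain Python; the function name, shapes, and the fixed example parameters are illustrative assumptions, not the authors' implementation, in which gamma and beta would be produced by a learned network conditioned on the task.

```python
# Hedged sketch of FiLM (feature-wise linear modulation), the generic
# mechanism named in the abstract. All names and values are illustrative
# assumptions; in MGMC, gamma and beta would be generated per task.

def film_modulate(features, gamma, beta):
    """Apply FiLM: scale each feature dimension by gamma and shift by beta."""
    return [g * f + b for f, g, b in zip(features, gamma, beta)]

# Example: a 4-dimensional node embedding modulated by (here hard-coded)
# per-dimension scale and shift parameters.
h = [0.5, -1.0, 2.0, 0.0]
gamma = [1.0, 0.5, 2.0, 1.0]    # per-dimension scale
beta = [0.0, 0.25, -1.0, 0.5]   # per-dimension shift
print(film_modulate(h, gamma, beta))  # → [0.5, -0.25, 3.0, 0.5]
```

Because modulation is elementwise, it adapts a shared representation to a given mode of the task distribution without retraining the underlying encoder.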