Keywords
Computer science
Embedding
Exploit
Knowledge graph
Word2vec
Graph
Dense graph
Theoretical computer science
Sparse matrix
Dependency (UML)
Machine learning
Artificial intelligence
Line graph
Physics
Gaussian distribution
1-planar graph
Quantum mechanics
Computer security
Authors
Xinglan Liu, Hussain Musa Hussain, Houssam Razouk, Roman Kern
Identifier
DOI:10.1145/3477314.3507031
Abstract
Graph embedding methods have emerged as effective solutions for knowledge graph completion. However, such methods are typically tested on benchmark datasets such as Freebase, and show limited performance when applied to sparse knowledge graphs with orders-of-magnitude lower density. To compensate for the lack of structure in a sparse graph, low-dimensional representations of textual information, such as word2vec or BERT embeddings, have been used. This paper proposes a BERT-based method (BERT-ConvE) that exploits transfer learning of BERT in combination with the convolutional network model ConvE. Compared with existing text-aware approaches, we make effective use of the context dependency of BERT embeddings by optimizing the feature extraction strategy. Experiments on ConceptNet show that the proposed method outperforms strong baselines by 50% on knowledge graph completion tasks. The proposed method is suitable for sparse graphs, as also demonstrated by empirical studies on the ATOMIC and sparsified-FB15k-237 datasets. Its effectiveness and simplicity make it appealing for industrial applications.
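To make the scoring side of the abstract concrete, below is a minimal NumPy sketch of the ConvE scoring step that BERT-ConvE builds on: subject and relation vectors are reshaped into a 2D "image", convolved, and projected back to embedding space to score every candidate object entity. All dimensions, the random weights, and the random embeddings (standing in for BERT-derived features) are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not the paper's hyperparameters)
d = 32          # embedding dimension
h, w = 4, 8     # 2D reshape shape: h * w == d
n_ent = 10      # number of candidate entities

# Random vectors stand in for BERT-derived entity/relation features
ent = rng.normal(size=(n_ent, d))
rel = rng.normal(size=d)

def conve_score(e_s, e_r, ent_table, kernel, W):
    """Minimal ConvE scoring: reshape, 2D-convolve, project, dot with all entities."""
    # Stack subject and relation embeddings into a (2h, w) "image"
    img = np.concatenate([e_s.reshape(h, w), e_r.reshape(h, w)], axis=0)
    # Valid 2D cross-correlation with a single kernel (no padding, stride 1)
    kh, kw = kernel.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    feat = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            feat[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    feat = np.maximum(feat, 0.0)                   # ReLU on feature map
    hidden = np.maximum(feat.ravel() @ W, 0.0)     # project back to dimension d
    return ent_table @ hidden                      # one score per candidate object

kernel = rng.normal(size=(3, 3))
W = rng.normal(size=((2 * h - 3 + 1) * (w - 3 + 1), d))
scores = conve_score(ent[0], rel, ent, kernel, W)
print(scores.shape)  # one score per entity: (10,)
```

In the full model these scores would be passed through a sigmoid and trained with a binary cross-entropy objective; the sketch only shows the forward scoring path.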