Computer science
Mode
Embedding
Transformer
Artificial intelligence
Modal
Machine learning
Representation (politics)
Property (philosophy)
Graph embedding
Molecular graph
Graph
Theoretical computer science
Engineering
Philosophy
Sociology
Political science
Voltage
Chemistry
Polymer chemistry
Electrical engineering
Law
Epistemology
Politics
Social science
Authors
Ke Wang,Wei Zhang,Yong Liu
Identifier
DOI:10.1109/bibm58861.2023.10385395
Abstract
Molecular property prediction plays a crucial role in drug screening and discovery. Its central task is to obtain effective embeddings of molecular structures. Molecules are commonly described by textual sequences and by graphs, and previous efforts have combined these modalities to address the information loss of single-modal representations across diverse tasks; integrating chemical information from different modalities is therefore essential for more accurate representations. Given the success of Transformers across many areas of artificial intelligence, leveraging the attention mechanism to integrate molecular sequence and graph representations is a promising route to better molecular embeddings. To this end, we propose a deep learning model called MJAF and design a novel information fusion strategy based on joint attention mechanisms. This approach effectively harnesses the strengths of both molecular representation modalities and significantly improves the quality of the resulting molecular embeddings. We conducted extensive experiments comparing our model with state-of-the-art methods; results on four independent datasets demonstrate significant improvements achieved by our proposed model.
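The abstract describes fusing a sequence-based and a graph-based molecular representation through a joint attention mechanism. The paper's exact MJAF architecture is not given here, so the following is only a minimal sketch of the general idea: sequence token embeddings attend over graph node embeddings via scaled dot-product cross-attention, and the attended context is concatenated with the sequence features before pooling into one molecule vector. All names, shapes, and the pooling choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention_fusion(seq_emb, graph_emb):
    """Fuse a sequence embedding (n_tokens, d) with a graph embedding
    (n_nodes, d) via cross-attention, then mean-pool to one vector.
    Hypothetical sketch; MJAF's actual fusion is not published here."""
    d = seq_emb.shape[-1]
    # Sequence tokens act as queries; graph nodes as keys/values.
    scores = seq_emb @ graph_emb.T / np.sqrt(d)       # (n_tokens, n_nodes)
    attn = softmax(scores, axis=-1)
    graph_context = attn @ graph_emb                  # (n_tokens, d)
    # Concatenate each token with its graph-aware context, then pool.
    fused = np.concatenate([seq_emb, graph_context], axis=-1)  # (n_tokens, 2d)
    return fused.mean(axis=0)                         # (2d,) molecule vector

rng = np.random.default_rng(0)
mol_vec = joint_attention_fusion(rng.normal(size=(12, 8)),   # e.g. 12 SMILES tokens
                                 rng.normal(size=(9, 8)))    # e.g. 9 atom nodes
print(mol_vec.shape)  # (16,)
```

A trained model would learn query/key/value projections and feed the fused vector to a property-prediction head; the sketch omits those learned parameters to keep the fusion step itself visible.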