Keywords
Computer science, Artificial intelligence, Multitask learning, Machine learning, Feature learning, Deep learning, Exploitation, Generalization, Feature engineering, Autoencoder, Feature (linguistics), Encoder, Property (philosophy), Task (project management), Philosophy, Economics, Mathematical analysis, Operating systems, Management, Epistemology, Linguistics, Computer security, Mathematics
Authors
Xiaochen Zhang, Chengkun Wu, Jiacai Yi, Xiangxiang Zeng, Canqun Yang, Aiping Lü, Tingjun Hou, Dongsheng Cao
Source
Journal: Research (American Association for the Advancement of Science)
Date: 2022-01-01
Volume: 2022
Cited by: 26
Identifier
DOI: 10.34133/research.0004
Abstract
Accurate prediction of the pharmacological properties of small molecules is becoming increasingly important in drug discovery. Traditional feature-engineering approaches rely heavily on handcrafted descriptors and/or fingerprints, which require extensive human expert knowledge. With the rapid progress of artificial intelligence technology, data-driven deep learning methods have shown unparalleled advantages over feature-engineering-based methods. However, when applied to predicting molecular properties, existing deep learning methods usually suffer from the scarcity of labeled data and the inability to share information between different tasks, resulting in poor generalization capability. Here, we propose a novel multitask learning BERT (Bidirectional Encoder Representations from Transformers) framework, named MTL-BERT, which leverages large-scale pretraining, multitask learning, and SMILES (simplified molecular input line entry specification) enumeration to alleviate the data scarcity problem. MTL-BERT first exploits a large amount of unlabeled data through self-supervised pretraining to mine the rich contextual information in SMILES strings, and then fine-tunes the pretrained model for multiple downstream tasks simultaneously by leveraging their shared information. Meanwhile, SMILES enumeration is used as a data augmentation strategy during the pretraining, fine-tuning, and test phases to substantially increase data diversity and help the model learn the key relevant patterns in complex SMILES strings. The experimental results show that the pretrained MTL-BERT model, with little additional fine-tuning, achieves much better performance than state-of-the-art methods on most of the 60 practical molecular datasets. Additionally, MTL-BERT leverages attention mechanisms to focus on the SMILES characters that are essential to the target properties, making the model interpretable.
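The core mechanism described above, a shared encoder fine-tuned on several property-prediction tasks at once, can be illustrated with a short sketch. The following PyTorch code is a minimal illustration and not the authors' implementation: the vocabulary size, model dimensions, and task names are assumed placeholders, and the real MTL-BERT starts from a pretrained BERT-style encoder over SMILES tokens rather than a freshly initialized one.

import torch
import torch.nn as nn

class MultiTaskSmilesModel(nn.Module):
    # Hypothetical sketch: one shared Transformer encoder body plus one
    # lightweight prediction head per downstream task.
    def __init__(self, vocab_size=100, d_model=256, n_heads=8,
                 n_layers=6, task_names=("logP", "solubility")):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # shared body
        # One small head per task; gradients from every task update the
        # shared encoder, which is how information is shared across tasks.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(d_model, 1) for name in task_names})

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))        # (batch, seq, d_model)
        pooled = h[:, 0]                               # first token as a [CLS]-style summary
        return {name: head(pooled) for name, head in self.heads.items()}

model = MultiTaskSmilesModel()
tokens = torch.randint(0, 100, (4, 32))  # batch of 4 toy token sequences
outputs = model(tokens)                  # dict: task name -> (4, 1) predictions

Because every task's loss backpropagates through the same encoder, labeled data from one property task implicitly regularizes the representation used by the others, which is the information sharing the abstract refers to.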
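SMILES enumeration, the augmentation strategy used in all three phases, exploits the fact that one molecule has many valid SMILES spellings. Below is a minimal sketch using RDKit's doRandom option; the function name and the oversampling loop are illustrative assumptions, not the paper's code.

from rdkit import Chem

def enumerate_smiles(smiles, n_variants=10):
    """Return up to n_variants distinct random SMILES for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    variants = set()
    for _ in range(n_variants * 5):  # oversample; duplicates are discarded
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        if len(variants) >= n_variants:
            break
    return sorted(variants)

print(enumerate_smiles("CCO"))  # e.g. ['C(C)O', 'C(O)C', 'CCO', 'OCC']

Feeding several such spellings of the same molecule to the model increases data diversity without new labels, encouraging it to learn patterns tied to the molecule rather than to any one string layout.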