Cheminformatics
Discriminative model
Computer science
Transformer
Machine learning
Benchmark
Artificial intelligence
Data mining
Engineering
Bioinformatics
Voltage
Geodesy
Geography
Electrical engineering
Biology
Authors
Ross Irwin, Spyridon Dimitriadis, Jiazhen He, Esben Jannik Bjerrum
Identifier
DOI:10.1088/2632-2153/ac3ffb
Abstract
Transformer models coupled with a simplified molecular line entry system (SMILES) have recently proven to be a powerful combination for solving challenges in cheminformatics. These models, however, are often developed specifically for a single application and can be very resource-intensive to train. In this work we present the Chemformer model—a Transformer-based model which can be quickly applied to both sequence-to-sequence and discriminative cheminformatics tasks. Additionally, we show that self-supervised pre-training can improve performance and significantly speed up convergence on downstream tasks. On direct synthesis and retrosynthesis prediction benchmark datasets we publish state-of-the-art results for top-1 accuracy. We also improve on existing approaches for a molecular optimisation task and show that Chemformer can optimise on multiple discriminative tasks simultaneously. Models, datasets and code will be made available after publication.
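Before a Transformer can process SMILES strings, each string must be split into tokens. As an illustration of this preprocessing step, here is a minimal regex-based SMILES tokenizer; it is a sketch following the commonly used atom-level tokenisation pattern, not Chemformer's actual implementation, and the function name `tokenize_smiles` is our own.

```python
import re

# Atom-level SMILES tokenisation pattern: bracketed atoms, two-letter
# elements (Br, Cl), single-letter organic-subset atoms, bonds, ring
# closures, and branch symbols. A sketch, not Chemformer's actual code.
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into chemically meaningful tokens."""
    tokens = SMILES_PATTERN.findall(smiles)
    # Sanity check: the tokens must reconstruct the input exactly,
    # otherwise the string contains characters the pattern misses.
    assert "".join(tokens) == smiles, f"untokenisable characters in {smiles!r}"
    return tokens

# Example: aspirin tokenises into 21 tokens.
print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
```

The resulting token sequence is what would be mapped to vocabulary indices and fed to the encoder; bracketed species such as `[nH]` or `[O-]` stay intact as single tokens rather than being split character by character.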