Computer science
Inference
Scalability
Transformer
Artificial intelligence
Probabilistic logic
Generative model
Autoregressive model
Generative grammar
Machine learning
Latent variable
Encoder
Property
Conditional probability distribution
Conditional dependence
Bayes' theorem
Conditional probability
Theoretical computer science
Bayesian inference
Training set
License
Mixture model
Algorithm
Data mining
Pattern recognition
Hidden Markov model
Authors
Kwon, Bum Chul; Shapira, Ben; Raboh, Moshiko; Sethi, Shreyans; Murarka, Shruti; Morrone, Joseph A.; Hu, Jianying; Suryanarayanan, Parthasarathy
Source
Journal: Cornell University - arXiv
Date: 2025-11-04
Identifier
DOI:10.48550/arxiv.2511.02769
Abstract
The chemical space of drug-like molecules is vast, motivating the development of generative models that must learn broad chemical distributions, enable conditional generation by capturing structure-property representations, and provide fast molecular generation. Meeting these objectives depends on modeling choices, including the probabilistic modeling approach, the conditional generative formulation, the architecture, and the molecular input representation. To address these challenges, we present STAR-VAE (Selfies-encoded, Transformer-based, AutoRegressive Variational Auto Encoder), a scalable latent-variable framework with a Transformer encoder and an autoregressive Transformer decoder. It is trained on 79 million drug-like molecules from PubChem, using SELFIES to guarantee syntactic validity. The latent-variable formulation enables conditional generation: a property predictor supplies a conditioning signal that is applied consistently to the latent prior, the inference network, and the decoder. Our contributions are: (i) a Transformer-based latent-variable encoder-decoder model trained on SELFIES representations; (ii) a principled conditional latent-variable formulation for property-guided generation; and (iii) efficient finetuning with low-rank adapters (LoRA) in both encoder and decoder, enabling fast adaptation with limited property and activity data. On the GuacaMol and MOSES benchmarks, our approach matches or exceeds baselines, and latent-space analyses reveal smooth, semantically structured representations that support both unconditional exploration and property-aware generation. On the Tartarus benchmarks, the conditional model shifts docking-score distributions toward stronger predicted binding. These results suggest that a modernized, scale-appropriate VAE remains competitive for molecular generation when paired with principled conditioning and parameter-efficient finetuning.
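The abstract names three ingredients worth making concrete: a latent-variable model trained via the reparameterization trick, a conditional prior whose parameters come from a property signal rather than being fixed at N(0, I), and LoRA-style low-rank adapters for finetuning. The NumPy sketch below illustrates these three mechanisms in isolation; it is not the authors' code, and all function names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through the sampling step during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_conditional_prior(mu_q, log_var_q, mu_p, log_var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    # summed over latent dimensions. In a conditional VAE, (mu_p, var_p)
    # are produced from the property-conditioning signal, so the KL pulls
    # the posterior toward a property-dependent region of latent space.
    var_q = np.exp(log_var_q)
    var_p = np.exp(log_var_p)
    return 0.5 * np.sum(
        log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def lora_forward(x, W, A, B, alpha=1.0):
    # LoRA-style update: the frozen weight W (d_out, d_in) is augmented by
    # a trainable low-rank delta B @ A, with A (r, d_in) and B (d_out, r)
    # for small rank r; only A and B are updated during finetuning.
    return x @ (W + alpha * (B @ A)).T
```

With `B` initialized to zero (the usual LoRA initialization), `lora_forward` reproduces the frozen base layer exactly, so finetuning starts from the pretrained model's behavior; the conditional KL reduces to the standard VAE KL when the prior parameters are held at zero.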