Computer science
Scalability
Embedding
Cheminformatics
Artificial intelligence
Machine learning
Graph
Domain knowledge
Property (philosophy)
Domain (mathematical analysis)
Similarity (geometry)
Theoretical computer science
Chemistry
Mathematics
Database
Computational chemistry
Image (mathematics)
Epistemology
Mathematical analysis
Philosophy
Authors
Rahul Sheshanarayana, Fengqi You
Source
Journal: Advanced Science [Wiley]
Date: 2025-04-09
Volume/Issue: 12 (22): e2503271
Citations: 1
Identifiers
DOI: 10.1002/advs.202503271
Abstract
Knowledge distillation (KD) is a powerful model compression technique that transfers knowledge from complex teacher models to compact student models, reducing computational costs while preserving predictive accuracy. This study investigated KD's efficacy in molecular property prediction across domain-specific and cross-domain tasks, leveraging state-of-the-art graph neural networks (SchNet, DimeNet++, and TensorNet). In the domain-specific setting, KD improved regression performance across diverse quantum mechanical properties in the QM9 dataset, with DimeNet++ student models achieving up to a 90% improvement compared to non-KD baselines. Notably, in certain cases, smaller student models achieved comparable or even superior improvements while being 2× smaller, highlighting KD's ability to enhance efficiency without sacrificing predictive performance. Cross-domain evaluations further demonstrated KD's adaptability, where embeddings from QM9-trained teacher models enhanced predictions for ESOL (log S) and FreeSolv (ΔG_hyd), with SchNet exhibiting the highest gains of ≈65% in log S predictions. Embedding analysis revealed substantial student-teacher alignment gains, with the relative shift in cosine similarity distribution peaks reaching up to 1.0 across student models. These findings highlighted KD as a robust strategy for enhancing molecular representation learning, with implications for cheminformatics, materials science, and drug discovery.
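The abstract describes feature-based teacher-student distillation and a post-hoc embedding-alignment check via cosine similarity. The minimal sketch below illustrates that general setup only: the `Encoder` MLPs, the random toy data, the cosine-based distillation term, and the weighting `alpha` are hypothetical placeholders, not the SchNet/DimeNet++/TensorNet models, the QM9/ESOL/FreeSolv pipelines, or the exact objective used in the paper (which the abstract does not specify). In practice the teacher would be pretrained (e.g., on QM9) before being frozen.

```python
# Minimal sketch of feature-based knowledge distillation for molecular
# property regression. Models, data, and loss weighting are illustrative
# placeholders, not the paper's architectures or training setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a GNN encoder: maps molecular descriptors to an embedding
    and a scalar property prediction (e.g., log S)."""
    def __init__(self, in_dim: int, hidden: int, emb_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, emb_dim),
        )
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, x):
        emb = self.net(x)
        return emb, self.head(emb).squeeze(-1)

torch.manual_seed(0)
x = torch.randn(256, 32)          # toy molecular descriptors
y = torch.randn(256)              # toy target property

teacher = Encoder(32, 256, 128)   # larger teacher (assumed pretrained; frozen here)
student = Encoder(32, 64, 128)    # smaller student sharing the embedding size
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha = 0.5                       # distillation weight (assumed)

for step in range(200):
    with torch.no_grad():
        t_emb, _ = teacher(x)
    s_emb, s_pred = student(x)
    # Supervised regression loss plus an embedding-alignment (distillation) term;
    # a cosine-based term is one common choice for feature-based KD.
    loss_sup = F.mse_loss(s_pred, y)
    loss_kd = 1.0 - F.cosine_similarity(s_emb, t_emb, dim=-1).mean()
    loss = loss_sup + alpha * loss_kd
    opt.zero_grad()
    loss.backward()
    opt.step()

# Post-hoc check analogous to the embedding analysis in the abstract: the
# distribution of per-molecule student-teacher cosine similarities should
# shift toward 1 after distillation.
with torch.no_grad():
    s_emb, _ = student(x)
    t_emb, _ = teacher(x)
    cos = F.cosine_similarity(s_emb, t_emb, dim=-1)
    print(f"mean student-teacher cosine similarity: {cos.mean().item():.3f}")
```

In a real cross-domain experiment, the frozen teacher would be the QM9-trained model and the student would be trained on the smaller target dataset (ESOL or FreeSolv), with the same two-term objective structure.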