Computer Science
Language Models
Inference
Artificial Intelligence
Generalizability Theory
Machine Learning
Data Mining
Theoretical Computer Science
Mathematics
Statistics
Authors
Heqin Zhu,Ruifeng Li,Feng Zhang,Fenghe Tang,Tong Ye,Xin Li,Yunjie Gu,Peng Xiong,S. Kevin Zhou
Identifier
DOI:10.1101/2025.08.06.668731
Abstract
RNA language models have achieved strong performance across diverse downstream tasks by leveraging large-scale sequence data. However, RNA function is fundamentally shaped by its hierarchical structure, making the integration of structural information into pre-training essential. Existing methods often depend on noisy structural annotations or introduce task-specific biases, limiting model generalizability. Here, we introduce structRFM, a structure-guided RNA foundation model pre-trained on millions of RNA sequences and secondary structures by integrating base-pairing interactions into masked language modeling through a novel pair-matching operation. The structure-guided mask and the nucleotide-level mask are further balanced by a dynamic masking ratio. structRFM learns joint knowledge of sequential and structural data, producing versatile representations, including classification-level, sequence-level, and pair-wise matrix features, that support a broad spectrum of downstream adaptations. structRFM ranks among the top models in zero-shot homology classification across fifteen biological language models and sets new benchmarks for secondary structure prediction. structRFM further derives Zfold, which enables robust and reliable tertiary structure prediction, with consistent improvements both in the estimated 3D structures and in the 2D structures extracted from them, achieving a 19% performance gain over AlphaFold3 on the RNA-Puzzles dataset. In functional tasks such as internal ribosome entry site identification, structRFM achieves a 49% performance gain in F1 score. These results demonstrate the effectiveness of structure-guided pre-training and highlight a promising direction for developing multi-modal RNA language models in computational biology.
To support the broader scientific community, we have made the 21-million sequence-structure dataset and the pre-trained structRFM model fully open-source, facilitating the development of multimodal foundation models in biology.
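The abstract describes masking at two granularities: a nucleotide-level mask, and a structure-guided mask in which base-paired positions are masked jointly, with a dynamic ratio balancing the two. The paper's actual pair-matching operation is not reproduced here; the following is only a minimal illustrative sketch of joint pair masking, where the function name `structure_guided_mask`, the `pairs` dictionary encoding of secondary structure, and all parameter defaults are assumptions for illustration, not the authors' implementation.

```python
import random

def structure_guided_mask(seq, pairs, struct_ratio=0.5, mask_rate=0.15):
    """Illustrative sketch (not the structRFM implementation):
    mask positions in an RNA sequence; with probability `struct_ratio`,
    a masked position's base-paired partner (from the secondary
    structure, given as a symmetric dict `pairs`) is masked jointly.
    `struct_ratio` stands in for the dynamic masking ratio that
    balances structure-guided and nucleotide-level masking."""
    n_mask = max(1, int(len(seq) * mask_rate))
    masked = set()
    positions = list(range(len(seq)))
    random.shuffle(positions)
    for i in positions:
        if len(masked) >= n_mask:
            break
        masked.add(i)
        # Structure-guided branch: jointly mask the pairing partner,
        # so the model must recover the base pair from context.
        if i in pairs and random.random() < struct_ratio:
            masked.add(pairs[i])
    return ["[MASK]" if i in masked else t for i, t in enumerate(seq)]
```

With `struct_ratio=1.0` every masked position drags its partner along, so masked tokens always come in complementary pairs; with `struct_ratio=0.0` the sketch reduces to plain nucleotide-level masking.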