Keywords
Sequence (biology), Embedding, Computer science, Algebraic structure, Protein structure prediction, Algorithm, Protein structure, Artificial intelligence, Theoretical computer science, Mathematics, Physics, Pure mathematics, Biology, Genetics, Nuclear magnetic resonance
Authors
Amy X. Lu, Wilson Yan, Kevin Yang, Vladimir Gligorijević, Kyunghyun Cho, Pieter Abbeel, Richard Bonneau, Nathan C. Frey
Identifier
DOI: 10.1101/2024.08.06.606920
Abstract
Existing protein machine learning representations typically model either the sequence or the structure distribution, with the other modality implicit. The latent space of sequence-to-structure prediction models such as ESMFold represents the joint distribution of sequence and structure; however, we find these embeddings to exhibit massive activations, whereby some channels have values 3000× higher than others, regardless of the input. Further, under continuous compression schemes, ESMFold embeddings can be reduced by a factor of 128× along the channel dimension and 8× along the length dimension, while retaining structure information at <2 Å accuracy and performing competitively on protein function and localization benchmarks. Under discrete compression schemes, we construct a tokenized all-atom structure vocabulary that retains high reconstruction accuracy, thereby introducing a tokenized representation of all-atom structure that can be obtained from sequence alone. We term this series of embeddings CHEAP (Compressed Hourglass Embedding Adaptations of Proteins), obtained via the HPCT (Hourglass Protein Compression Transformer) architecture. CHEAP is a compact representation of both protein structure and sequence, sheds light on information content asymmetries between sequence and structure, democratizes representations captured by large models, and is designed to have flexible downstream applications such as generation, search, and prediction.
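As an illustration of the "massive activations" phenomenon the abstract describes, the following minimal sketch flags channels whose peak magnitude dwarfs that of the typical channel. This is not the authors' code; the embedding shape, the `find_massive_channels` helper, and the ratio threshold are hypothetical choices for illustration.

```python
import torch

def find_massive_channels(embeddings: torch.Tensor, ratio: float = 100.0) -> torch.Tensor:
    """Return indices of channels whose peak |activation| exceeds
    `ratio` times the median channel's peak |activation|.

    embeddings: hypothetical latent of shape (batch, seq_len, channels).
    """
    # Peak absolute activation per channel, pooled over batch and length.
    per_channel_peak = embeddings.abs().amax(dim=(0, 1))  # (channels,)
    typical = per_channel_peak.median()
    return torch.nonzero(per_channel_peak > ratio * typical).squeeze(-1)

# Toy usage: plant two outlier channels (3000x scale, as in the abstract)
# into otherwise random embeddings and recover them.
x = torch.randn(4, 256, 1024)
x[..., 7] *= 3000.0
x[..., 42] *= 3000.0
print(find_massive_channels(x))  # tensor([ 7, 42])
```

Likewise, a rough sketch of the compression factors quoted above: projecting a hypothetical 1024-channel latent down 128× along channels and pooling 8× along length. This only illustrates the shapes involved, not the actual HPCT architecture; the module name and the average-pool/repeat decoder are assumptions.

```python
import torch
import torch.nn as nn

class HourglassBottleneckSketch(nn.Module):
    """Shape-level illustration of 128x channel / 8x length compression."""

    def __init__(self, channels: int = 1024, channel_factor: int = 128, length_factor: int = 8):
        super().__init__()
        assert channels % channel_factor == 0
        self.length_factor = length_factor
        self.down = nn.Linear(channels, channels // channel_factor)   # 128x channel compression
        self.pool = nn.AvgPool1d(kernel_size=length_factor, stride=length_factor)  # 8x length compression
        self.up = nn.Linear(channels // channel_factor, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, channels); seq_len assumed divisible by length_factor.
        z = self.down(x)                                   # (batch, seq_len, channels/128)
        z = self.pool(z.transpose(1, 2)).transpose(1, 2)   # (batch, seq_len/8, channels/128)
        # A real model would decode with learned upsampling; here we
        # simply repeat along length to restore the original shape.
        z = z.repeat_interleave(self.length_factor, dim=1)
        return self.up(z)

x = torch.randn(2, 256, 1024)
print(HourglassBottleneckSketch()(x).shape)  # torch.Size([2, 256, 1024])
```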