Computer science
Artificial intelligence
Graph
Feature learning
Similarity (geometry)
Machine learning
Deep learning
Gene regulatory network
Artificial neural network
Gene
Representation (politics)
Computational biology
Biology
Theoretical computer science
Gene expression
Genetics
Politics
Political science
Image (mathematics)
Law
Authors
Tianyu Liu, Yuge Wang, Rex Ying, Hongyu Zhao
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 3
Identifier
DOI: 10.48550/arxiv.2310.02275
Abstract
Discovering genes with similar functions across diverse biomedical contexts poses a significant challenge in gene representation learning due to data heterogeneity. In this study, we address this problem by introducing a novel model called Multimodal Similarity Learning Graph Neural Network, which combines Multimodal Machine Learning and Deep Graph Neural Networks to learn gene representations from single-cell sequencing and spatial transcriptomic data. Leveraging 82 training datasets from 10 tissues, three sequencing techniques, and three species, we create informative graph structures for model training and gene representation generation, while incorporating regularization with weighted similarity learning and contrastive learning to learn cross-data gene-gene relationships. This design ensures that we can offer gene representations containing functional similarity across different contexts in a joint space. Comprehensive benchmarking analysis shows our model's capacity to effectively capture gene function similarity across multiple modalities, outperforming state-of-the-art methods in gene representation learning by up to 97.5%. Moreover, we employ bioinformatics tools in conjunction with the gene representations to uncover pathway enrichment, causal regulatory networks, and the functions of disease-associated or dosage-sensitive genes. Therefore, our model efficiently produces unified gene representations for the analysis of gene functions, tissue functions, diseases, and species evolution.
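The abstract describes the training recipe at a high level: a graph neural network encoder over per-dataset gene graphs, regularized with weighted similarity learning and contrastive learning so that the same gene observed in different datasets maps to nearby points in a joint embedding space. The sketch below is a minimal, hypothetical PyTorch illustration of that recipe, not the authors' implementation; the `GeneGNN` encoder, the two loss functions, and all tensor shapes, adjacency matrices, and weights are illustrative assumptions.

```python
# Hypothetical sketch: GNN gene embeddings regularized with weighted
# similarity learning and contrastive learning (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneGNN(nn.Module):
    """Two-layer graph convolution over a normalized gene-gene adjacency matrix."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # x: gene features (genes x in_dim); adj: dense normalized adjacency (genes x genes)
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)

def weighted_similarity_loss(z, sim_target, weights):
    """Penalize weighted deviation of embedding cosine similarity from a prior gene-gene similarity."""
    z = F.normalize(z, dim=1)
    sim_pred = z @ z.t()
    return (weights * (sim_pred - sim_target) ** 2).mean()

def contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: the same gene in two datasets forms a positive pair."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature
    labels = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, labels)

# Toy usage: random matrices stand in for two datasets sharing 500 genes.
n_genes, in_dim = 500, 64
x_a, x_b = torch.randn(n_genes, in_dim), torch.randn(n_genes, in_dim)
adj = torch.eye(n_genes)          # placeholder for a real gene graph
sim_target = torch.eye(n_genes)   # placeholder prior similarity
weights = torch.ones(n_genes, n_genes)

model = GeneGNN(in_dim, 128, 32)
z_a, z_b = model(x_a, adj), model(x_b, adj)
loss = weighted_similarity_loss(z_a, sim_target, weights) + contrastive_loss(z_a, z_b)
loss.backward()
```

In practice the graph structures, similarity priors, and loss weights would come from the single-cell and spatial transcriptomic datasets described in the abstract rather than from the identity-matrix placeholders used here.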