Interpretability
Computer science
Artificial intelligence
Gene expression profiling
Profiling (computer programming)
Computational biology
Gene expression
Gene
Pattern recognition (psychology)
Machine learning
Biology
Genetics
Operating system
Authors
Mara Graziani,Niccolò Marini,Nicolas Deutschmann,Nikita Janakarajan,Henning Müller,María Rodríguez Martínez
Identifier
DOI:10.1007/978-3-031-17976-1_5
Abstract
Interpretability of deep learning is widely used to evaluate the reliability of medical imaging models and reduce the risks of inaccurate patient recommendations. For models exceeding human performance, e.g. predicting RNA structure from microscopy images, interpretable modelling can further be used to uncover highly non-trivial patterns which are otherwise imperceptible to the human eye. We show that interpretability can reveal connections between the microscopic appearance of cancer tissue and its gene expression profiling. While exhaustive profiling of all genes from the histology images is still challenging, we estimate the expression values of a well-known subset of genes that is indicative of cancer molecular subtype, survival, and treatment response in colorectal cancer. Our approach successfully identifies meaningful information from the image slides, highlighting hotspots of high gene expression. Our method can help characterise how gene expression shapes tissue morphology, and this may be beneficial for patient stratification in the pathology unit. The code is available on GitHub.
Keywords: Interpretability · Histopathology · Transcriptomics · Attention
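The abstract describes predicting slide-level gene expression while highlighting image hotspots. A common way to obtain both a prediction and an interpretable heatmap is attention-based pooling over tile features: each tile receives a learned weight, the weighted average forms the slide representation, and the weights themselves mark the hotspots. The following is a minimal NumPy sketch of that idea with randomly initialised parameters; the dimensions, variable names, and the simple tanh attention scorer are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)


def attention_pool(tile_feats, w_att, v_att):
    """Softmax attention over tiles: returns slide feature and per-tile weights."""
    scores = np.tanh(tile_feats @ w_att) @ v_att      # one score per tile
    weights = np.exp(scores - scores.max())           # stable softmax
    weights /= weights.sum()
    slide_feat = weights @ tile_feats                 # weighted average, shape (d,)
    return slide_feat, weights


# Hypothetical sizes: 50 tiles, 64-d tile embeddings, 10 target genes
n_tiles, d, n_genes = 50, 64, 10
tile_feats = rng.normal(size=(n_tiles, d))            # stand-in for CNN tile features
w_att = rng.normal(size=(d, 32)) * 0.1                # attention MLP weights (untrained)
v_att = rng.normal(size=32) * 0.1
w_out = rng.normal(size=(d, n_genes)) * 0.1           # linear expression head

slide_feat, weights = attention_pool(tile_feats, w_att, v_att)
expression_pred = slide_feat @ w_out                  # one predicted value per gene

print(weights.shape, expression_pred.shape)
```

In this formulation, visualising `weights` over the tile grid yields the hotspot map, while `expression_pred` gives the estimated expression of the gene subset; in practice all parameters would be trained end-to-end against measured expression values.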