Glaucoma
Artificial intelligence
Retina
Computer science
Medicine
Computer vision
Ophthalmology
Pattern recognition (psychology)
Optometry
Authors
Parmita Mehta,Christine A. Petersen,Joanne C. Wen,Michael R. Banitt,Philip Chen,Karine D. Bojikian,Catherine Egan,Su‐In Lee,Magdalena Balazinska,Aaron Lee,Ariel Rokem
Identifier
DOI:10.1016/j.ajo.2021.04.021
Abstract
Purpose: To develop a multimodal model to automate glaucoma detection.
Design: Development of a machine-learning glaucoma detection model.
Methods: We selected a study cohort from the UK Biobank data set with 1193 eyes of 863 healthy subjects and 1283 eyes of 771 subjects with glaucoma. We trained a multimodal model that combines multiple deep neural nets, trained on macular optical coherence tomography volumes and color fundus photographs, with demographic and clinical data. We performed an interpretability analysis to identify the features the model relied on to detect glaucoma, and we determined the importance of different features in detecting glaucoma using interpretable machine learning methods. We also evaluated the model on subjects who did not have a diagnosis of glaucoma on the day of imaging but were later diagnosed (progress-to-glaucoma [PTG]).
Results: Results show that a multimodal model that combines imaging with demographic and clinical features is highly accurate (area under the curve 0.97). Interpretation of this model highlights biological features known to be related to the disease, such as age, intraocular pressure, and optic disc morphology. Our model also points to previously unknown or disputed features, such as pulmonary function and retinal outer layers. Accurate prediction in PTG highlights variables that change with progression to glaucoma: age and pulmonary function.
Conclusions: The accuracy of our model suggests distinct sources of information in each imaging modality and in the different clinical and demographic variables. Interpretable machine learning methods elucidate subject-level prediction and help uncover the factors that lead to accurate predictions, pointing to potential disease mechanisms or variables related to the disease.
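To make the kind of fusion described in the Methods concrete, below is a minimal, hypothetical sketch of a multimodal classifier: one encoder for macular OCT volumes, one for color fundus photographs, and a head that concatenates their features with tabular demographic/clinical variables before predicting a glaucoma logit. The encoder choices, layer sizes, input shapes, and the `n_clinical_features=10` setting are illustrative assumptions, not the authors' actual architecture or training details.

```python
# Hypothetical sketch of a multimodal glaucoma classifier (not the paper's model).
import torch
import torch.nn as nn

class MultimodalGlaucomaNet(nn.Module):
    def __init__(self, n_clinical_features: int):
        super().__init__()
        # 3D CNN encoder for macular OCT volumes: (batch, 1, depth, height, width)
        self.oct_encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # 2D CNN encoder for color fundus photographs: (batch, 3, height, width)
        self.fundus_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Fusion head: imaging features concatenated with clinical/demographic data
        self.classifier = nn.Sequential(
            nn.Linear(32 + 32 + n_clinical_features, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for glaucoma vs. healthy
        )

    def forward(self, oct_volume, fundus_image, clinical):
        fused = torch.cat(
            [self.oct_encoder(oct_volume),
             self.fundus_encoder(fundus_image),
             clinical],
            dim=1,
        )
        return self.classifier(fused)

# Example forward pass with dummy tensors (shapes are illustrative only).
model = MultimodalGlaucomaNet(n_clinical_features=10)
oct_vol = torch.randn(2, 1, 32, 64, 64)     # small OCT volume
fundus = torch.randn(2, 3, 128, 128)        # fundus photograph
clinical = torch.randn(2, 10)               # e.g., age, IOP, spirometry values
logits = model(oct_vol, fundus, clinical)   # -> shape (2, 1)
```

Under this kind of design, per-feature importance of the clinical/demographic inputs (and of the learned imaging features) can be probed with standard interpretable-ML tooling such as SHAP-style attribution, which is in the spirit of the interpretability analysis the abstract describes.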