Prediction of the gastric precancerous risk based on deep learning of multimodal medical images
Deep learning
Artificial intelligence
Computer science
Authors
Changzheng Ma, Peng Zhang, Shiyu Du, Shao Li
Source
Journal: Research Square · Date: 2024-07-18 · Citations: 1
Identifier
DOI:10.21203/rs.3.rs-4747833/v1
Abstract
Effective early warning of diverse gastritis lesions, including precancerous lesions of gastric cancer (PLGC) and Non-PLGC, and of their progression risks is pivotal for the early prevention of gastric cancer. An attention-based model (Attention-GT) was constructed. For the first time, it integrated multimodal features, namely gastroscopic images, tongue images, and clinicopathological indicators (age, gender, Hp), to assist in distinguishing diverse gastritis lesions and progression risks. A longitudinal cohort of 384 participants with gastritis (206 Non-PLGC and 178 PLGC) was constructed. These two baseline groups were subdivided into progressive (Pro) and Non-Pro groups, respectively, based on a mean follow-up of 3.3 years. The Attention-GT model exhibited excellent performance in distinguishing diverse gastritis lesions and progression risks: its AUC in distinguishing PLGC was 0.83, significantly higher than that of clinicopathological indicators alone (AUC = 0.72, p < 0.01). Importantly, for patients whose baseline lesions were Non-PLGC, the AUC of Attention-GT in distinguishing the Pro group was 0.84, significantly higher than that of clinicopathological indicators (AUC = 0.67, p < 0.01), demonstrating the value of fusing gastroscopic and tongue images in predicting the progression risk of gastritis. Finally, morphological features related to diverse gastritis lesions and progression risks were identified in both gastroscopic and tongue images through interpretability analysis. Collectively, our study demonstrates the value of integrating multimodal medical-image data to assist the prediction of diverse gastritis lesions and progression risks, paving a new way for early gastric cancer risk prediction.
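The abstract does not describe the internals of Attention-GT, so the exact fusion mechanism is unknown. As a rough illustration only, the sketch below shows one common way an attention layer can fuse per-modality feature vectors (e.g., a gastroscopic-image embedding, a tongue-image embedding, and a projection of the clinicopathological indicators) into a single representation. All names, dimensions, and the scoring scheme here are hypothetical assumptions, not taken from the paper:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(modality_feats, w_score):
    """Fuse per-modality feature vectors with scalar attention weights.

    modality_feats: list of (d,) vectors, one per modality
                    (hypothetical: gastroscopic, tongue, clinical).
    w_score:        (d,) scoring vector; in a trained model this
                    would be a learned parameter.
    Returns the fused (d,) vector and the attention weights.
    """
    scores = np.array([f @ w_score for f in modality_feats])
    weights = softmax(scores)
    fused = sum(w * f for w, f in zip(weights, modality_feats))
    return fused, weights

# Toy usage with made-up 4-dimensional embeddings.
gastro = np.array([1.0, 0.5, 0.2, 0.1])
tongue = np.array([0.3, 1.2, 0.4, 0.0])
clinical = np.array([0.2, 0.1, 0.9, 0.6])
fused, weights = attention_fusion([gastro, tongue, clinical],
                                  w_score=np.ones(4))
```

In this sketch each modality gets a single scalar weight; a real implementation would typically learn the scoring parameters end to end and may use finer-grained (per-feature or cross-modal) attention.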