Endoscopic ultrasonography
Artificial intelligence
Medical diagnosis
Medicine
Hepatology
Diagnostic accuracy
Surgical oncology
Area under the curve
Radiology
Computer science
Endoscopy
Internal medicine
Pharmacokinetics
Authors
Ryotaro Uema,Yoshito Hayashi,Takashi Kizu,Takumi Igura,Hideharu Ogiyama,Takuya Yamada,Risato Takeda,Kengo Nagai,Takeshi Inoue,Masashi Yamamoto,Shinjiro Yamaguchi,Takashi Kanesaka,Takeo Yoshihara,Mototsugu Kato,Shunsuke Yoshii,Yoshiki Tsujii,Shinichiro Shinzaki,Tetsuo Takehara
Identifier
DOI:10.1007/s00535-024-02102-1
Abstract
Background: We developed an artificial intelligence (AI)-based endoscopic ultrasonography (EUS) system for diagnosing the invasion depth of early gastric cancer (EGC), and we evaluated the performance of this system.

Methods: A total of 8280 EUS images from 559 EGC cases were collected from 11 institutions. Within this dataset, 3451 images (285 cases) from one institution were used as a development dataset. The AI model consisted of segmentation and classification steps, followed by the CycleGAN method to bridge differences in EUS images captured by different equipment. AI model performance was evaluated using an internal validation dataset collected from the same institution as the development dataset (1726 images, 135 cases). External validation was conducted using images collected from the other 10 institutions (3103 images, 139 cases).

Results: The area under the curve (AUC) of the AI model in the internal validation dataset was 0.870 (95% CI: 0.796–0.944). Regarding diagnostic performance, the accuracy/sensitivity/specificity values of the AI model, experts (n = 6), and nonexperts (n = 8) were 82.2/63.4/90.4%, 81.9/66.3/88.7%, and 68.3/60.9/71.5%, respectively. The AUC of the AI model in the external validation dataset was 0.815 (95% CI: 0.743–0.886). The accuracy/sensitivity/specificity values of the AI model (74.1/73.1/75.0%) and the real-time diagnoses of experts (75.5/79.1/72.2%) in the external validation dataset were comparable.

Conclusions: Our AI model demonstrated a diagnostic performance equivalent to that of experts.
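The abstract reports accuracy, sensitivity, specificity, and AUC for a binary classification of invasion depth. As a minimal sketch of how such metrics are computed in general (not the authors' actual evaluation code, and assuming the task is framed as a binary label per case with a continuous model score), the standard definitions look like this:

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true-positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true-negative rate
    return accuracy, sensitivity, specificity

def auc(y_true, scores):
    """AUC via the Mann-Whitney formulation: the probability that a random
    positive case receives a higher score than a random negative case
    (ties count as 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else (0.5 if p == n else 0.0)
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])` returns 0.75, and thresholding the scores at 0.5 before calling `diagnostic_metrics` yields the corresponding accuracy/sensitivity/specificity triple.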