Hausdorff distance
Segmentation
Artificial intelligence
Cone-beam CT
Computer science
Consistency (knowledge base)
Deep learning
Nuclear medicine
Pattern recognition (psychology)
Computed tomography
Medicine
Radiology
Authors
Pieter-Jan Verhelst, A. Smolders, Thomas Beznik, Jeroen Meewis, Arne Vandemeulebroucke, Eman Shaheen, Adriaan Van Gerven, Holger Willems, Constantinus Politis, Reinhilde Jacobs
Identifier
DOI:10.1016/j.jdent.2021.103786
Abstract
To develop and validate a layered deep learning algorithm which automatically creates three-dimensional (3D) surface models of the human mandible from cone-beam computed tomography (CBCT) imaging.

Two convolutional networks using a 3D U-Net architecture were combined and deployed in a cloud-based artificial intelligence (AI) model. The AI model was trained in two phases and iteratively improved to optimize the segmentation result using 160 anonymized full skull CBCT scans of orthognathic surgery patients (70 preoperative scans and 90 postoperative scans). The final AI model was then tested by assessing timing, consistency, and accuracy on a separate testing dataset of 15 pre- and 15 postoperative full skull CBCT scans. The AI model was compared to user-refined AI segmentations (RAI) and to semi-automatic segmentation (SA), which is the current clinical standard. The time needed for segmentation was measured in seconds. Intra- and inter-operator consistency were assessed to check whether the segmentation protocols delivered reproducible results. The following consistency metrics were used: intersection over union (IoU), Dice similarity coefficient (DSC), Hausdorff distance (HD), absolute volume difference, and root mean square (RMS) distance. To evaluate how closely the AI and RAI results matched those of the SA method, their accuracy was measured using IoU, DSC, HD, absolute volume difference, and RMS distance.

On average, SA took 1218.4s. RAI showed a significant drop (p<0.0001) in timing to 456.5s (2.7-fold decrease). The AI method took only 17s (71.3-fold decrease). The average intra-operator IoU for RAI was 99.5%, compared to 96.9% for SA. For inter-operator consistency, RAI scored an IoU of 99.6%, compared to 94.6% for SA. The AI method was always consistent by default. In both the intra- and inter-operator consistency assessments, RAI outperformed SA on all metrics, indicating better consistency. With SA as the ground truth, AI and RAI scored an IoU of 94.6% and 94.4%, respectively. All accuracy metrics were similar for AI and RAI, meaning that both methods produce 3D models that closely match those produced by SA.

A layered 3D U-Net deep learning algorithm, with and without additional user refinements, improves time-efficiency, reduces operator error, and provides excellent accuracy when benchmarked against the clinical standard.

Semi-automatic segmentation in CBCT imaging is time-consuming and allows user-induced errors. Layered convolutional neural networks using a 3D U-Net architecture allow direct segmentation of high-resolution CBCT images. This approach creates 3D mandibular models in a more time-efficient and consistent way, and it is accurate when benchmarked against semi-automatic segmentation.
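For readers unfamiliar with the evaluation metrics cited above, the sketch below (not the authors' code) illustrates how IoU, DSC, absolute volume difference, Hausdorff distance, and RMS surface distance can be computed for two binary 3D segmentation masks. It assumes NumPy/SciPy, non-empty masks defined on the same voxel grid, and a hypothetical `spacing` parameter for voxel size; the function names are illustrative choices, not taken from the paper.

```python
"""Illustrative metric sketch for comparing two binary 3D segmentation masks."""
import numpy as np
from scipy import ndimage


def overlap_metrics(a: np.ndarray, b: np.ndarray) -> dict:
    """IoU, Dice similarity coefficient, and absolute volume difference (in voxels)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return {
        "iou": inter / union,                          # assumes at least one non-empty mask
        "dsc": 2 * inter / (a.sum() + b.sum()),
        "abs_volume_diff_voxels": abs(int(a.sum()) - int(b.sum())),
    }


def surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Symmetric surface-to-surface distances between two masks.

    Surface voxels are the mask minus its morphological erosion; distances are
    read from Euclidean distance transforms of each surface, scaled by voxel
    spacing (in mm, for example).
    """
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    # Distances in both directions: a-surface -> b-surface and b-surface -> a-surface.
    return np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])


def distance_metrics(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> dict:
    """Symmetric Hausdorff distance and RMS surface distance."""
    d = surface_distances(a, b, spacing)
    return {
        "hausdorff": float(d.max()),
        "rms_distance": float(np.sqrt(np.mean(d ** 2))),
    }
```

Approximating the surfaces as mask-minus-erosion and reusing the distance transforms for both the maximum (Hausdorff) and the RMS statistic is a common voxel-grid approach; other toolkits may define the surfaces or the spacing handling slightly differently.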