Medicine
Segmentation
Radiology
Deep learning
Ultrasound
Artificial intelligence
Medical physics
Computer science
Authors
Tiziano Natali,Andrey Zhylka,Karin A Olthof,Jan Maerten Smit,T. R. Baetens,Niels F. M. Kok,Koert Kuhlmann,Oleksandra Ivashchenko,Theo J.M. Ruers,Matteo Fusaglia
Identifier
DOI:10.1117/1.jmi.11.2.024501
Abstract
Purpose: Training and evaluation of a supervised deep-learning model for the segmentation of hepatic tumors from intraoperative ultrasound (iUS) images, with the aim of improving the accuracy of tumor margin assessment during liver surgeries and the detection of lesions during colorectal surgeries.

Approach: In this retrospective study, a U-Net network was trained with the nnU-Net framework in different configurations for the segmentation of colorectal liver metastases (CRLM) from iUS. The model was trained on B-mode intraoperative hepatic US images hand-labeled by an expert clinician and was tested on an independent set of similar images. The average age of the study population was 61.9 ± 9.9 years. Ground truth for the test set was provided by a radiologist, and three additional delineation sets were used to compute inter-observer variability.

Results: The presented model achieved a Dice similarity coefficient (DSC) of 0.84 (p = 0.0037), comparable to the scores of the expert human raters. The model segmented hypoechoic and mixed lesions more accurately (DSC of 0.89 and 0.88, respectively) than hyperechoic and isoechoic ones (DSC of 0.70 and 0.60, respectively), missing only lesions that were isoechoic or >20 mm in diameter (8% of the tumors). Including extra margins of probable tumor tissue around the lesions in the training ground truth resulted in a lower DSC of 0.75 (p = 0.0022).

Conclusion: The model can accurately segment hepatic tumors from iUS images and has the potential to speed up resection-margin definition during surgery and lesion detection in screenings by automating iUS assessment.
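The DSC (Dice similarity coefficient) values reported in the abstract are a standard overlap metric between a predicted segmentation mask and a ground-truth mask. A minimal sketch of how it is typically computed from binary masks (an illustrative implementation, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: perfect agreement by the usual convention.
        return 1.0
    return 2.0 * intersection / total

# Toy 4x4 example: prediction overlaps ground truth in 3 of 4 pixels.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_score(pred, truth), 3))  # 2*3 / (4+3) ≈ 0.857
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap, so the reported model score of 0.84 sits in the range typically considered good agreement for tumor segmentation.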