Segmentation
Computer science
Artificial intelligence
Computer vision
Coding (set theory)
Pattern recognition (psychology)
Task (project management)
Adaptation (eye)
Thorax
Medicine
Anatomy
Set (abstract data type)
Programming language
Physics
Management
Optics
Economics
Authors
Jing-Yu Zhao, Ziwei Nie, Jizhong Shen, Jun He, Xiaoping Yang
Source
Journal: Biomedical Physics & Engineering Express
[IOP Publishing]
Date: 2023-12-29
Volume/Issue: 10 (1): 015021
Identifier
DOI: 10.1088/2057-1976/ad1663
Abstract
Rib segmentation in 2D chest x-ray images is a crucial and challenging task. On one hand, chest x-ray images are the most prevalent form of medical imaging owing to their convenience, affordability, and minimal radiation exposure. On the other hand, these images present intricate challenges, including overlapping anatomical structures, substantial noise and artifacts, and inherent anatomical complexity. Currently, most methods employ deep convolutional networks for rib segmentation, which require an extensive quantity of accurately labeled data for effective training; yet precise pixel-level labeling of chest x-ray images is notably difficult. Additionally, many methods neglect the problem of fragmented predictions and the rigorous post-processing they entail. In contrast, CT images can be labeled directly, as they capture the 3D structure and patterns of organs and tissues. In this paper, we redesign the rib segmentation task for chest x-ray images and propose a concise and efficient cross-modal method based on unsupervised domain adaptation, with a centerline loss function that prevents discontinuous results and avoids rigorous post-processing. We use digitally reconstructed radiography (DRR) images, together with labels generated from 3D CT images, to guide rib segmentation on unlabeled 2D chest x-ray images. Remarkably, our model achieved a higher dice score on the test samples, and the results are highly interpretable, without requiring any annotated rib markings on chest x-ray images. Our code and demo will be released at https://github.com/jialin-zhao/RibsegBasedonUDA.
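The abstract reports results as a dice score between predicted and reference rib masks. As a point of reference, the standard dice coefficient for binary masks can be sketched as follows; this is a generic illustration, not the authors' evaluation code, and the function name and epsilon smoothing are our own choices:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap).

    pred, target: arrays of the same shape; nonzero entries count as foreground.
    eps: small smoothing term so empty masks do not divide by zero.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: identical masks give a dice score of 1.0,
# fully disjoint masks give a score near 0.
mask = np.array([[1, 1], [0, 0]])
print(dice_score(mask, mask))  # → 1.0 (up to the eps smoothing)
```

Higher values indicate better overlap between the predicted rib mask and the reference; the paper's claim of "a higher dice score" refers to this kind of overlap measure.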