Authors
Haoyu Wang, Sizheng Guo, Zhongying Deng, Junlong Cheng, Tianbin Li, Jinxian Chen, Yanzhou Su, Ziyan Huang, Yiqing Shen, Bin Fu, Shaoting Zhang, Junjun He
Identifier
DOI: 10.1109/TNNLS.2025.3586694
Abstract
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this article, we introduce segment anything model (SAM)-Med3D, a vision foundation model (VFM) for general-purpose segmentation on volumetric medical images. Given only a few 3-D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and preprocess a large-scale 3-D medical image segmentation dataset, SA-Med3D-140K, from 70 public datasets and 8K licensed private cases from hospitals. This dataset includes 22K 3-D images and 143K corresponding masks. SAM-Med3D, a promptable segmentation model characterized by its fully learnable 3-D structure, is trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse medical scenarios, including different anatomical structures, modalities, targets, and zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promising application to diverse downstream tasks as a pretrained model. Our approach illustrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at: https://github.com/uni-medical/SAM-Med3D.
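The abstract describes a point-promptable interface: given a 3-D volume and a few 3-D prompt points, the model returns a segmentation mask. As a minimal sketch of that interface only — not the actual SAM-Med3D model or its API (see the linked GitHub repository for real usage) — the toy function below flood-fills from each prompt point, keeping connected voxels whose intensity is close to the seed's. The function name, signature, and tolerance parameter are all hypothetical.

```python
import numpy as np

def segment_with_prompts(volume, prompt_points, tolerance=0.1):
    """Toy promptable 3-D segmentation: flood-fill from each prompt
    point, keeping 6-connected voxels whose intensity is within
    `tolerance` of the seed voxel. Illustrates the prompt-points-in,
    mask-out interface only; it is not the SAM-Med3D model."""
    mask = np.zeros(volume.shape, dtype=bool)
    for seed in prompt_points:
        seed = tuple(seed)
        seed_val = volume[seed]
        stack = [seed]
        while stack:
            z, y, x = stack.pop()
            if mask[z, y, x]:
                continue  # already accepted
            if abs(volume[z, y, x] - seed_val) > tolerance:
                continue  # intensity too far from the seed
            mask[z, y, x] = True
            # Push the six face-adjacent neighbors that lie in bounds.
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0]
                        and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2]):
                    stack.append((nz, ny, nx))
    return mask

# Synthetic volume: a bright 6x6x6 "lesion" in a dark background.
vol = np.zeros((16, 16, 16), dtype=np.float32)
vol[4:10, 4:10, 4:10] = 1.0

# One prompt point inside the lesion recovers the whole cube.
mask = segment_with_prompts(vol, [(6, 6, 6)], tolerance=0.1)
print(mask.sum())  # 216 voxels = 6 * 6 * 6
```

The real model replaces this intensity heuristic with a learned 3-D encoder-decoder, but the calling convention — volume plus a handful of 3-D prompt points, mask back — is the same shape of interaction the abstract describes.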