Computer Science
Segmentation
Computer Vision
Artificial Intelligence
Medical Imaging
Image Segmentation
Scale-Space Segmentation
Scale (Ratio)
Geography
Cartography
Authors
Yan Pang, Jiaming Liang, Teng Huang, Hao Chen, Yunhao Li, Dan Li, Lin Huang, Qiong Wang
Identifier
DOI: 10.1109/TMI.2023.3326188
Abstract
Hybrid transformer-based segmentation approaches have shown great promise in medical image analysis. However, they typically require considerable computational power and resources during both training and inference stages, posing a challenge for resource-limited medical applications common in the field. To address this issue, we present an innovative framework called Slim UNETR, designed to achieve a balance between accuracy and efficiency by leveraging the advantages of both convolutional neural networks and transformers. Our method features the Slim UNETR Block as a core component, which effectively enables information exchange through self-attention mechanism decomposition and cost-effective representation aggregation. Additionally, we utilize the throughput metric as an efficiency indicator to provide feedback on model resource consumption. Our experiments demonstrate that Slim UNETR outperforms state-of-the-art models in terms of accuracy, model size, and efficiency when deployed on resource-constrained devices. Remarkably, Slim UNETR achieves 92.44% dice accuracy on BraTS2021 while being 34.6x smaller and 13.4x faster during inference compared to Swin UNETR. Code: https://github.com/aigzhusmart/Slim-UNETR
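The abstract uses throughput as the efficiency indicator for model resource consumption. Below is a minimal, hypothetical sketch (not the authors' code) of how inference throughput in samples per second might be measured for a 3D segmentation model; the model handle, input shape, and iteration counts are assumptions for illustration only.

```python
import time
import torch

def measure_throughput(model, input_shape=(1, 4, 128, 128, 128),
                       warmup=10, iters=50, device="cuda"):
    """Return inference throughput in samples per second.
    input_shape is an assumed (batch, channels, D, H, W) 3D volume size."""
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        # Warm up so kernel compilation and caching do not skew timing.
        for _ in range(warmup):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    elapsed = time.time() - start
    return iters * input_shape[0] / elapsed  # samples processed per second

# Usage (hypothetical model object):
# throughput = measure_throughput(slim_unetr_model)
# print(f"{throughput:.2f} samples/sec")
```

Synchronizing the GPU before reading the clock is what makes the measurement reflect actual compute time rather than asynchronous kernel launches.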