Keywords: Autoencoder, Computer science, Segmentation, Artificial intelligence, Computer vision, Image segmentation, Pattern recognition (psychology), Artificial neural network
Authors
Xiaoqiang Shi,Zhenyu Yin,Guangjie Han,Wenzhuo Liu,Li Qin,Yuanguo Bi,Shurui Li
Identifier
DOI:10.1109/tcsvt.2023.3325360
Abstract
Although semantic segmentation methods have made remarkable progress, their slow inference limits their use in practical applications. Recently, some two-branch and three-branch real-time segmentation networks have been proposed that improve segmentation accuracy by adding branches to extract spatial or border information. In designing a branch that extracts spatial information, preserving high-resolution features or adding a segmentation loss to guide the spatial branch are the commonly used approaches. However, these approaches are not the most efficient. To address this, we design the spatial information extraction branch as an AutoEncoder, which allows us to extract the spatial structure and features of the image during the AutoEncoder's encoding and decoding process. Border, semantic, and spatial information are all helpful for segmentation tasks, and efficiently fusing these three kinds of information yields a better feature representation than the two-information fusion of dual-branch networks. However, existing three-branch networks have yet to explore this aspect deeply. Therefore, this paper designs a new three-branch network based on this insight. In addition, we propose a feature fusion module, the Unified Multi-Feature Fusion module (UMF), which fuses multiple features efficiently. Our method achieves a state-of-the-art trade-off between inference speed and accuracy on the Cityscapes, CamVid, and NightCity datasets. Specifically, BSSNet-T achieves 78.8% mIoU at 115.8 FPS on Cityscapes, 79.5% mIoU at 170.8 FPS on CamVid, and 52.6% mIoU at 172.3 FPS on NightCity. Code is available at https://github.com/SXQ-STUDY/BSSNet.
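The abstract does not describe the internals of the UMF module, but the general idea of fusing border, semantic, and spatial feature maps can be illustrated. The sketch below is a hypothetical simplification (not the paper's actual UMF): it assumes the three branch outputs share spatial resolution, concatenates them along the channel axis, and applies a learned 1x1 projection, simulated here with NumPy rather than a deep-learning framework.

```python
import numpy as np

def fuse_three_branch(border, semantic, spatial, proj):
    """Fuse three feature maps of shape (C, H, W) into one.

    Hypothetical sketch of multi-feature fusion, NOT the paper's UMF:
    concatenate along channels, then apply a 1x1 convolution, which is
    equivalent to a per-pixel linear map over the channel dimension.
    """
    x = np.concatenate([border, semantic, spatial], axis=0)  # (3C, H, W)
    # proj has shape (C_out, 3C); einsum applies it at every pixel
    return np.einsum('oc,chw->ohw', proj, x)

# Toy example with assumed sizes
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
border, semantic, spatial = (rng.standard_normal((C, H, W)) for _ in range(3))
proj = rng.standard_normal((C, 3 * C))
fused = fuse_three_branch(border, semantic, spatial, proj)
print(fused.shape)  # (4, 8, 8)
```

In a real network the 1x1 projection would be a trainable convolution, and attention or gating weights could modulate each branch before fusion; this sketch only shows the channel-fusion skeleton.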