Authors
Suiping Zhou, Yuru Guo, Zhiheng Liu, Chenyang Li, Wenjie Zhang, Wenjuan Qi
Abstract
Objective
With social and economic development, more and more areas are evolving into modern cities [1]. Road information is critical for urban planning, traffic navigation, and GIS data updates [2], but traditional manual data collection is inefficient. Advances in remote sensing technology allow road information to be extracted automatically from high-resolution remote sensing images [3]-[7].

Method
This paper proposes MSAU-Net, a road extraction model for high-resolution remote sensing images based on a multi-head self-attention mechanism and U-Net, to achieve accurate road extraction. MSAU-Net consists of three components: the Canny edge detection operator, the U-Net base network architecture, and the multi-head self-attention module, as shown in Fig. 1. Road edge features are extracted by the Canny edge detection operator [8] and fed into the model through convolutional operations for training and learning, which makes the extracted road edges smoother and more accurate.

Fig. 1. The architecture of MSAU-Net.

The MSAU-Net network includes an encoder, a decoder, and an intermediate multi-head self-attention module, M-SABlock. The image is first passed through the Canny edge operator to detect edges, then through the encoding convolution module to extract the image's low-level features, and finally through two downsampling blocks in sequence to reduce the spatial size and obtain high-level features; the number of channels is doubled after each downsampling block. The downsampled result is then fed to the self-attention module, which summarizes global information and produces the encoder output. Convolution is used to extract local information in the encoding module. The decoder employs three upsampling blocks to recover the corresponding feature maps. Skip connections from the downsampling path to the upsampling path, implemented through the multi-head self-attention module, copy features from the encoder to the decoder. Because low-level features already contain more global information, they can proceed directly to the decoder side; high-level features carry relatively less global information, so M-SABlock extracts global context for them more effectively. This improves the road extraction accuracy of the model.
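To make the description above concrete, the following is a minimal PyTorch/OpenCV sketch of the two ingredients involved: stacking a Canny edge map onto the input image, and applying multi-head self-attention to the deepest encoder features. Class names, channel counts, and thresholds are our own illustrative assumptions, not the authors' reference implementation of M-SABlock.

# Minimal sketch (PyTorch + OpenCV); names and hyperparameters are illustrative only.
import cv2
import numpy as np
import torch
import torch.nn as nn

def canny_edge_channel(image_bgr, low=50, high=150):
    """Stack a Canny edge map onto the image as an extra input channel."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)                    # H x W, values {0, 255}
    stacked = np.concatenate([image_bgr, edges[..., None]], axis=-1)
    return stacked.astype(np.float32) / 255.0             # H x W x 4

class MSABlock(nn.Module):
    """Multi-head self-attention over the encoder's high-level feature map
    (hypothetical stand-in for the paper's M-SABlock)."""
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)    # global context per position
        attended = attended.transpose(1, 2).reshape(b, c, h, w)
        return x + attended                                # residual connection

# Usage: attend over the deepest encoder features before decoding.
features = torch.randn(1, 256, 64, 64)                     # e.g. output of the second downsampling block
out = MSABlock(channels=256)(features)                     # same shape, globally contextualized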
Result and Discussion
On the CHN6-CUG dataset [9] and the Massachusetts roads dataset [10], the road extraction performance of the proposed network was compared with that of the FCN [11], U-Net [12], ResU-Net [13], and SAU-Net [14] networks under the same experimental settings. As can be seen in Fig. 2(a, c), compared with the other algorithms, the proposed algorithm effectively combines global contextual information to provide a global information-guided flow for the decoder. At the same time, it can dynamically select suitable receptive fields for remote sensing images of different sizes and better integrate multi-scale feature information through self-learning. As a result, it better extracts the edge contours of blurred, complex roads and significantly reduces over- and under-extraction, and it outperforms the other algorithms in both overall and detail performance.

For quantitative analysis of road extraction, this paper uses the Hausdorff distance (HD) [15], Precision (PRE), Dice similarity coefficient (DSC) [16], Sensitivity (SEN), and Mean intersection over union (MIOU) as evaluation metrics. As shown in Fig. 2(b), the road extraction performance of the proposed algorithm is significantly better than that of FCN, U-Net, ResU-Net, and SAU-Net. The average MIOU and DSC of the proposed algorithm are improved by 6.96% and 7.16%, respectively, compared with ResU-Net, and its PRE and SEN are improved by 9.08% and 6.56%, respectively, compared with SAU-Net. As shown in Fig. 2(d), the proposed algorithm outperforms the other algorithms on these metrics and achieves the best results on the test set, with an average HD, PRE, DSC, SEN, and MIOU of 2.209 cm, 90.12%, 91.26%, 89.38%, and 88.74%, respectively. Its average PRE and DSC are improved by 7.78% and 8.14%, respectively, compared with FCN, and its MIOU and SEN are improved by 5.58% and 7.41%, respectively, compared with U-Net. This indicates that the proposed algorithm is more accurate and efficient.

Fig. 2. Experimental results. (a) Visual comparison of road extraction results of different algorithms on the CHN6-CUG dataset. (b) Road extraction metrics on the CHN6-CUG dataset. (c) Visual comparison of road extraction results of different algorithms on the Massachusetts roads dataset. (d) Road extraction metrics on the Massachusetts roads dataset.

Conclusion
The experimental results show that the proposed algorithm achieves accurate and effective road extraction on the CHN6-CUG and Massachusetts roads datasets. In future work, we will reduce the number of model parameters and focus on lightweight model design.
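For reference, the pixel-wise metrics reported above (PRE, SEN, DSC, MIOU) and the Hausdorff distance can be computed from binary road masks as in the sketch below. The formulation follows common conventions (e.g., MIOU averaged over the road and background classes) and is not the authors' evaluation code.

# Generic sketch of the reported metrics; definitions are assumed conventions.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def road_metrics(pred, gt, eps=1e-8):
    """pred, gt: binary masks of the same shape (1 = road pixel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()

    precision   = tp / (tp + fp + eps)                     # PRE
    sensitivity = tp / (tp + fn + eps)                     # SEN (recall)
    dice        = 2 * tp / (2 * tp + fp + fn + eps)        # DSC
    iou_road    = tp / (tp + fp + fn + eps)
    iou_bg      = tn / (tn + fp + fn + eps)
    miou        = (iou_road + iou_bg) / 2                  # mean IoU over road and background

    # Symmetric Hausdorff distance between predicted and reference road pixels (in pixels)
    p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
    hd = max(directed_hausdorff(p_pts, g_pts)[0],
             directed_hausdorff(g_pts, p_pts)[0])
    return dict(PRE=precision, SEN=sensitivity, DSC=dice, MIOU=miou, HD=hd)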