Computer science
Artificial intelligence
Pattern recognition (psychology)
Convolutional neural network
Feature (linguistics)
Computer vision
Encoder
Segmentation
Feature extraction
Algorithm
Deep learning
Authors
Yang Wang,Zhaochen Sun,Wei Zhao
Source
Journal: IEEE Geoscience and Remote Sensing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2021-07-01
Volume/Issue: 18 (7): 1159-1163
Cited by: 1
Identifier
DOI: 10.1109/lgrs.2020.2998680
Abstract
With the development of convolutional neural networks, the semantic segmentation of remote sensing images has advanced considerably, but unsolved problems remain in this field due to the lack of multiscale information and the feature mismatch in the upsampling process. To solve these problems, we propose a network called the multiscale feature fusion and alignment network (MFANet). MFANet is composed of an encoder and a decoder. The encoder contains a fully convolutional network, a multilevel feature fusion block (MLFFB), and a multiscale feature pyramid (MSFP). These subnetworks can obtain fine-grained feature maps that are rich in multiscale and global features and improve segmentation results across multiple object scales. Moreover, MFANet uses a lightweight convolutional subnetwork, called the decoder, to upsample the segmentation map stage by stage. By combining features at three scales, the decoder promotes feature alignment during upsampling. Along with the decoder, MFANet utilizes a multistage supervision loss to enhance localization performance and boundary regression ability. Benefiting from the encoder-decoder structure and the innovative components inside the encoder, MFANet is very powerful for the semantic segmentation of remote sensing images and can handle complicated environments. We evaluate our MFANet on the Vaihingen and Potsdam data sets, and it outperforms state-of-the-art methods in both metrics and visual quality.
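The abstract does not specify the internals of the MLFFB or MSFP, but the core idea it describes, fusing feature maps from several encoder scales into one fine-grained representation, can be sketched generically. The snippet below is a minimal numpy illustration (not the paper's implementation): lower-resolution maps are upsampled to the finest resolution and concatenated along the channel axis. The shapes of the hypothetical three-scale pyramid are assumptions for illustration only.

```python
import numpy as np

def upsample_nearest(feat, scale):
    """Nearest-neighbour upsampling of a (C, H, W) feature map by an
    integer scale factor along both spatial axes."""
    return feat.repeat(scale, axis=1).repeat(scale, axis=2)

def fuse_multiscale(features):
    """Upsample each (C_i, H_i, W_i) map to the largest spatial
    resolution in the list and concatenate along channels — a generic
    multiscale-fusion sketch, not the specific MLFFB/MSFP design."""
    target_h = max(f.shape[1] for f in features)
    upsampled = []
    for f in features:
        scale = target_h // f.shape[1]  # assumes power-of-two pyramid
        upsampled.append(upsample_nearest(f, scale))
    return np.concatenate(upsampled, axis=0)

# Hypothetical three-scale encoder pyramid (channels grow as resolution shrinks)
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 32, 32)),
         rng.standard_normal((8, 16, 16)),
         rng.standard_normal((16, 8, 8))]

fused = fuse_multiscale(feats)
print(fused.shape)  # (28, 32, 32): 4 + 8 + 16 channels at the finest scale
```

A real encoder-decoder network would follow such a fusion with learned convolutions and, as the abstract notes, align features stage by stage during upsampling rather than in a single step.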