Keywords
Land cover, Modal verb, Computer science, Boundary (topology), Image fusion, Remote sensing, Algorithm, Fusion, Lidar, Artificial intelligence, Pattern recognition (psychology), Land use, Geology, Mathematics, Engineering, Image (mathematics), Materials science, Civil engineering, Mathematical analysis, Linguistics, Philosophy, Polymer chemistry
Authors
Yanliang Zhang,Jingyu Wang,Baohua Zhang,Zining Han
Identifier
DOI:10.1117/1.jrs.18.044519
Abstract
The semantic imbalance in class boundary areas is a key factor reducing the classification accuracy of remote sensing land cover algorithms. We propose a multi-source remote sensing image semantic segmentation network based on multi-modal collaboration and boundary-guided fusion (BGF). The BGF module uses class boundary information as a constraint, embeds semantic alignment strategies into the encoder, and enhances the deep semantic features of each modality. On this basis, a boundary guidance strategy assigns different weights to the boundary and interior areas of each category to guide feature fusion. Furthermore, to reduce the impact of multi-modal feature heterogeneity on fusion, a cross-modal collaborative fusion module is constructed to associate complementary information between multi-modal features and to fully exploit the collaborative relationship between multi-modal images in both the spatial and channel domains. Comparative experiments against representative algorithms were conducted on the WHU-OPT-SAR dataset. The results show that the proposed method improves the mean intersection over union and overall accuracy by 3.3% and 2.2%, respectively, over MCANet; in particular, the intersection over union for the road category increases by 10.0%. These results demonstrate the effectiveness of the proposed model.
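The core idea of boundary guidance — mixing two modalities with a per-pixel weight that differs between boundary and interior regions — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function name, the linear mixing scheme, and the scalar weights `w_boundary` / `w_interior` are all assumptions introduced here for clarity.

```python
import numpy as np

def boundary_guided_fusion(f_opt, f_sar, boundary_prob,
                           w_boundary=0.7, w_interior=0.4):
    """Hypothetical sketch of boundary-guided feature fusion.

    f_opt, f_sar  : (H, W, C) feature maps from the optical and SAR branches
    boundary_prob : (H, W) map in [0, 1]; 1 means "on a class boundary"
    w_boundary / w_interior are illustrative mixing weights, not values
    from the paper.
    """
    # Per-pixel mixing weight: boundary pixels lean toward w_boundary,
    # interior pixels toward w_interior, interpolating smoothly between.
    w = boundary_prob * w_boundary + (1.0 - boundary_prob) * w_interior
    w = w[..., None]  # broadcast the (H, W) weight over the channel axis
    return w * f_opt + (1.0 - w) * f_sar
```

In the paper's full model this per-pixel weighting is learned and combined with spatial- and channel-domain attention in the cross-modal collaborative fusion module; the sketch only shows the boundary-versus-interior weighting step.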