Subject: Computer Science
Concept tags: Metadata, Segmentation, Natural language processing, Artificial intelligence, Information retrieval, Representation (politics), Image segmentation, Interlingua, Semantics (computer science), Remote sensing, World Wide Web, Geology, Politics, Compiler, Programming language, Law, Political science
Authors
Libo Wang, Sijun Dong, Ying Chen, Xiaoliang Meng, Shenghui Fang, Songlin Fei
Identifier
DOI: 10.1109/TGRS.2024.3477548
Abstract
Semantic segmentation of remote sensing images plays a vital role in a wide range of Earth observation applications, such as land use/land cover mapping, environmental monitoring, and sustainable development. Driven by rapid developments in artificial intelligence, deep learning (DL) has emerged as the mainstream approach for semantic segmentation and has achieved many breakthroughs in the field of remote sensing. However, most DL-based methods focus on unimodal visual data while ignoring the rich multimodal information present in the real world. Non-visual data, such as text, can supply extra knowledge about the real world, which can strengthen the interpretability, reliability, and generalization of visual models. Inspired by this, we propose a novel metadata-collaborative segmentation network (MetaSegNet) that applies vision-language representation learning to semantic segmentation of remote sensing images. Unlike common model structures that use only unimodal visual data, we extract key characteristics (e.g., the climate zone) from freely available remote sensing image metadata and convert them into geographic text prompts via the generic ChatGPT. We then construct an image encoder, a text encoder, and a cross-modal attention fusion subnetwork to extract image and text features and apply image-text interaction. Benefiting from this design, the proposed MetaSegNet not only demonstrates superior generalization in zero-shot testing but also achieves accuracy competitive with state-of-the-art semantic segmentation methods on the large-scale OpenEarthMap dataset (70.4% mIoU), the Potsdam dataset (93.3% mean F1 score), and the LoveDA dataset (52.0% mIoU).
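The abstract does not give implementation details of the cross-modal attention fusion subnetwork. As a rough illustration only (not the authors' code), the general idea of fusing image features with text-prompt embeddings via scaled dot-product cross-attention can be sketched in NumPy; all function names, shapes, and the residual-fusion choice here are hypothetical assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_feats, txt_feats):
    """Fuse image patch features (queries) with text token features
    (keys/values) using scaled dot-product cross-attention, then add
    a residual connection back to the image features."""
    d = img_feats.shape[-1]
    scores = img_feats @ txt_feats.T / np.sqrt(d)   # (N_img, N_txt)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    attended = weights @ txt_feats                  # (N_img, d)
    return img_feats + attended                     # residual fusion

# Toy example: 4 image patches and 3 text tokens, feature dim 8.
rng = np.random.default_rng(0)
img = rng.standard_normal((4, 8))
txt = rng.standard_normal((3, 8))
fused = cross_modal_attention(img, txt)
```

In a real model the queries, keys, and values would pass through learned projections and the output through further layers; the sketch keeps only the attention-and-fuse step that the abstract names.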