Topics
Computer Science; Graphics; Salience; Remote Sensing; Artificial Intelligence; Computer Vision; Geology; Theoretical Computer Science
Authors
Jie Liu, Jinpeng He, Huaixin Chen, Yang Ruoyu, Ying Huang
Source
Journal: Remote Sensing
[Multidisciplinary Digital Publishing Institute]
Date: 2025-02-28
Volume/Issue: 17 (5): 861
Abstract
In recent years, numerous advanced lightweight models have been proposed for salient object detection (SOD) in optical remote sensing images (ORSI). However, most methods still face challenges such as performance limitations and an imbalance between accuracy and computational cost. To address these issues, we propose SggNet, a novel semantic- and graph-guided lightweight network for ORSI-SOD. SggNet adopts a classical encoder-decoder structure with MobileNet-V2 as the backbone, ensuring efficient parameter utilization. Furthermore, we design an Efficient Global Perception Module (EGPM) that captures global feature relationships and semantic cues at limited computational cost, enhancing the model's ability to perceive salient objects in complex scenarios, and a Semantic-Guided Edge Awareness Module (SEAM) that leverages the semantic consistency of deep features to suppress background noise in shallow features, accurately predict object boundaries, and preserve the detailed shapes of salient objects. To further aggregate multi-level features efficiently and preserve the integrity and complexity of the overall object shape, we introduce a Graph-Based Region Awareness Module (GRAM). This module incorporates non-local operations in the graph convolution domain to deeply explore high-order relationships between adjacent layers, while utilizing depth-wise separable convolution blocks to significantly reduce computational cost. Extensive quantitative and qualitative experiments demonstrate that the proposed model achieves excellent performance with only 2.70 M parameters and 1.38 G FLOPs, while delivering an inference speed of 108 FPS, striking a balance between efficiency and accuracy that meets practical application needs.
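The abstract gives no implementation details, but one ingredient it names explicitly, the depth-wise separable convolution blocks used inside GRAM to cut computational cost, is a standard lightweight building block. The sketch below is an illustrative PyTorch example of such a block, not the authors' code; the channel sizes, normalization, and activation choices are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise separable convolution: a per-channel 3x3 depth-wise conv
    followed by a 1x1 point-wise conv, which uses far fewer parameters and
    FLOPs than a standard 3x3 convolution with the same channel widths."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.depthwise(x)   # spatial filtering within each channel
        x = self.pointwise(x)   # cross-channel mixing
        return self.act(self.bn(x))


if __name__ == "__main__":
    # Hypothetical 64-channel feature map, e.g. a mid-level decoder feature.
    feats = torch.randn(1, 64, 32, 32)
    block = DepthwiseSeparableConv(64, 64)
    print(block(feats).shape)  # torch.Size([1, 64, 32, 32])
```

Replacing standard convolutions with blocks of this kind is a common way lightweight SOD decoders keep parameter counts in the low millions, consistent with the 2.70 M parameters and 1.38 G FLOPs reported above.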