Hyperspectral imaging
Artificial intelligence
Computer science
Foundation models
Interpretation
Machine learning
Geography
Archaeology
Programming languages
Authors
Di Wang,Meiqi Hu,Jin Yao,Yuchun Miao,Jiaqi Yang,Yichu Xu,Xiaolei Qin,Jiaqi Ma,Lingyu Sun,Chenxing Li,Chuan Fu,Hongruixuan Chen,Chengxi Han,Naoto Yokoya,Jing Zhang,Minqiang Xu,Lin Liu,Lefei Zhang,Chen Wu,Bo Du
Identifier
DOI: 10.1109/TPAMI.2025.3557581
Abstract
Accurate hyperspectral image (HSI) interpretation is critical for providing valuable insights into earth observation applications such as urban planning, precision agriculture, and environmental monitoring. However, existing HSI processing methods are predominantly task-specific and scene-dependent, which severely limits their ability to transfer knowledge across tasks and scenes and thereby reduces their practicality in real-world applications. To address these challenges, we present HyperSIGMA, a vision-transformer-based foundation model that unifies HSI interpretation across tasks and scenes and is scalable to over one billion parameters. To overcome the spectral and spatial redundancy inherent in HSIs, we introduce a novel sparse sampling attention (SSA) mechanism, which effectively promotes the learning of diverse contextual features and serves as the basic building block of HyperSIGMA. HyperSIGMA integrates spatial and spectral features using a specially designed spectral enhancement module. In addition, we construct a large-scale hyperspectral dataset, HyperGlobal-450K, for pre-training; it contains about 450K hyperspectral images, significantly surpassing existing datasets in scale. Extensive experiments on various high-level and low-level HSI tasks demonstrate HyperSIGMA's versatility and superior representational capability compared to current state-of-the-art methods. Moreover, HyperSIGMA shows significant advantages in scalability, robustness, cross-modal transfer capability, real-world applicability, and computational efficiency. The code and models will be released at HyperSIGMA.
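The abstract's central architectural idea is the sparse sampling attention (SSA) block, introduced to avoid attending over the heavy spectral and spatial redundancy of HSI tokens. The abstract does not spell out the mechanism, so the following is a minimal PyTorch sketch of one plausible reading, assuming SSA follows the general deformable-attention recipe: each query predicts a few spatial offsets, gathers features at those sparse locations, and mixes them with learned weights. All names, shapes, offset scaling, and the single-head design are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseSamplingAttention(nn.Module):
    """Illustrative sparse sampling attention (SSA) sketch.

    Each query token predicts a small set of 2-D sampling offsets, gathers
    features at those sparse locations via bilinear interpolation, and
    aggregates them with learned attention weights. This follows the general
    deformable-attention recipe and is NOT the authors' implementation.
    """

    def __init__(self, dim: int, num_points: int = 8):
        super().__init__()
        self.num_points = num_points
        self.offset_proj = nn.Linear(dim, 2 * num_points)  # (dx, dy) per point
        self.weight_proj = nn.Linear(dim, num_points)      # logits per point
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) tokens laid out row-major on an h x w grid, N = h * w.
        b, n, c = x.shape
        v = self.value_proj(x).transpose(1, 2).reshape(b, c, h, w)

        # Reference location of each query in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        gx, gy = torch.meshgrid(xs, ys, indexing="xy")      # each (h, w)
        ref = torch.stack((gx, gy), dim=-1).reshape(1, n, 1, 2)

        # Predict sparse sampling locations around each query (assumption:
        # tanh-bounded offsets confined to a local neighborhood).
        offsets = self.offset_proj(x).reshape(b, n, self.num_points, 2)
        loc = (ref + 0.5 * offsets.tanh()).clamp(-1, 1)     # (B, N, P, 2)

        # Bilinearly sample values at the predicted points: (B, C, N, P).
        sampled = F.grid_sample(v, loc, mode="bilinear", align_corners=True)

        # Attention over only the P sampled points, not all N tokens.
        attn = self.weight_proj(x).softmax(dim=-1)          # (B, N, P)
        out = (sampled * attn.unsqueeze(1)).sum(dim=-1)     # (B, C, N)
        return self.out_proj(out.transpose(1, 2))           # (B, N, C)

# Usage: 256 tokens from a 16x16 patch grid with 64-dim features.
tokens = torch.randn(2, 16 * 16, 64)
ssa = SparseSamplingAttention(dim=64, num_points=8)
print(ssa(tokens, 16, 16).shape)  # torch.Size([2, 256, 64])
```

The key property this sketch conveys is that per-query cost scales with the handful of sampled points rather than all N tokens, which is what makes the mechanism attractive for redundant hyperspectral data. The paper additionally pairs SSA with a spectral enhancement module to fuse spatial and spectral features; that component is not modeled here.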