Multispectral remote sensing image (MSI) semantic segmentation faces the challenges of limited labeled data and significant scene variability. Although domain adaptation (DA) and domain generalization (DG) methods alleviate these issues to some extent, DA requires target domain (TD) data and DG offers limited task adaptability. The recently emerged segment anything model (SAM) demonstrates exceptional zero-shot generalization capability, yet its visible-light training data and reliance on interactive prompts prevent its direct application to MSI segmentation. To address these challenges, this article proposes a single-source domain defect-aware adaptation and style-modulated generalization network (SDSnet), which integrates two key innovations: defect-aware prompt learning, which automatically focuses on high-difficulty regions through entropy-based defect detection, and style generalization learning, which enhances cross-domain adaptability via codebook-based style modulation. Through knowledge distillation, SDSnet enables efficient inference using only the base network, with no additional computational overhead. Extensive experiments on three TDs demonstrate SDSnet's superiority over state-of-the-art DA, DG, and SAM-based methods. Code will be available at https://github.com/zhaoboyu34526/SDSnet.
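The entropy-based defect detection mentioned above can be illustrated with a minimal sketch: per-pixel Shannon entropy of the softmax class probabilities flags low-confidence (high-difficulty) regions. The function name, the fixed threshold, and the NumPy formulation are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def entropy_defect_map(logits, threshold=0.5):
    """Hypothetical sketch: flag high-entropy pixels as 'defect' regions.

    logits: array of shape (C, H, W) holding per-class scores.
    Returns (normalized entropy map, boolean defect mask).
    """
    # Numerically stable softmax over the class axis
    z = logits - logits.max(axis=0, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=0, keepdims=True)
    # Shannon entropy per pixel, normalized to [0, 1] by log(C)
    eps = 1e-12
    ent = -(p * np.log(p + eps)).sum(axis=0) / np.log(logits.shape[0])
    return ent, ent > threshold

# Toy example: 3 classes on a 2x2 image. Pixel (0, 0) is confidently
# classified; pixel (1, 1) has a near-uniform distribution.
logits = np.array([[[9.0, 0.1], [0.1, 0.3]],
                   [[0.1, 0.1], [0.2, 0.3]],
                   [[0.1, 0.1], [0.1, 0.3]]])
ent, mask = entropy_defect_map(logits)
```

In a defect-aware prompting pipeline, such a mask could then steer prompts toward the uncertain regions rather than requiring interactive user input.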