Adverse weather conditions such as rain, fog, and snow reduce visibility and degrade image quality, challenging the reliability of outdoor vision systems. Previous research has mainly focused on network models tailored to specific adverse weather conditions, limiting their effectiveness across the diverse weather scenarios encountered in video processing. Recent work instead pursues unified models for weather removal, significantly improving video quality under adverse conditions. However, the performance of these methods deteriorates notably in real environments due to the domain gap between synthetic training data and real-world scenes. In this paper, we present a meta-learning framework with a self-supervised learning (SSL) branch aimed at boosting adaptability. In particular, we employ a two-stage training process. First, joint training is performed to establish a comprehensive model for weather reconstruction. Then, Meta-BN training fine-tunes the affine coefficients of the Batch Normalization (BN) layers, enabling the model to adjust quickly to different weather scenarios while maintaining its reconstruction quality. Moreover, an SSL-driven update strategy reinforces this targeted optimization, enabling Test-time Weather Adaptation (TT-WA) and effective generalization to unfamiliar weather conditions. Experimental results on multiple benchmark datasets demonstrate that TT-WA consistently achieves state-of-the-art performance in both qualitative and quantitative evaluations under a variety of weather conditions, including rain, haze, and snow. More importantly, our approach exhibits robust adaptive reconstruction on unseen real-world videos, further underscoring its ability to generalize to diverse and complex weather scenarios.
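To make the test-time adaptation step concrete, the following is a minimal PyTorch-style sketch of updating only the BN affine coefficients with a self-supervised objective. The function name `tt_wa_adapt` and the placeholder `ssl_loss_fn` are illustrative assumptions; the paper's actual SSL objective and update schedule are not shown here.

```python
import torch
import torch.nn as nn

def tt_wa_adapt(model, frames, ssl_loss_fn, steps=1, lr=1e-4):
    """Sketch of test-time weather adaptation: optimize only the affine
    coefficients (weight/bias) of BatchNorm layers using a label-free,
    self-supervised loss computed on the incoming video frames."""
    # Freeze all parameters, then re-enable only BN affine parameters.
    for p in model.parameters():
        p.requires_grad_(False)
    bn_params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            for p in (m.weight, m.bias):
                if p is not None:
                    p.requires_grad_(True)
                    bn_params.append(p)

    optimizer = torch.optim.Adam(bn_params, lr=lr)
    model.train()  # BN uses current batch statistics during adaptation
    for _ in range(steps):
        optimizer.zero_grad()
        loss = ssl_loss_fn(model, frames)  # placeholder SSL objective
        loss.backward()
        optimizer.step()
    model.eval()
    return model
```

Restricting updates to the BN affine parameters keeps the adaptation lightweight and low-risk: only a small fraction of the weights move at test time, which matches the paper's goal of quick per-scenario adjustment without degrading the jointly trained reconstruction backbone.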