Keywords: computer science; artificial intelligence; machine learning; traffic sign recognition; traffic signs; adverse weather; meteorology
Abstract
Robust traffic sign detection and recognition under adverse weather conditions is a critical challenge for autonomous driving systems. This paper presents a novel approach that combines zero-shot learning with contrastive vision-language pre-training to enhance the resilience of traffic sign recognition systems against weather-induced visual impairments. Our method leverages a limited dataset to train a model capable of understanding and processing images degraded by various weather conditions such as rain, fog, and snow without direct exposure to these conditions during training. By integrating descriptive language data with visual cues, our model learns to identify and interpret traffic signs through a generalizable semantic embedding, facilitating robust detection and recognition across unseen weather scenarios. The framework employs a two-stage training process: the initial stage focuses on learning general visual features from minimally weather-affected images, while the subsequent stage enhances the model's ability to predict and adapt to weather-specific distortions using a novel zero-shot learning strategy. Experimental evaluations demonstrate superior performance over traditional methods, particularly in zero-shot scenarios where the model encounters completely novel weather conditions. This approach not only advances the field of image restoration in severe weather but also sets a new standard for the deployment of vision-based systems in real-world environments where variable weather is a common challenge.
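The zero-shot recognition step described above can be sketched as matching an image embedding against text-prompt embeddings in a shared semantic space. The snippet below is a minimal illustration, not the paper's implementation: the fixed unit-vector "embeddings" and the noisy "weather-degraded" vector are stand-ins for the outputs of real pre-trained image and text encoders.

```python
import numpy as np

def normalize(v):
    """L2-normalize embeddings so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in text embeddings for class prompts such as
# "a photo of a stop sign in heavy fog" (illustrative 4-dim vectors,
# NOT real encoder outputs).
class_prompts = ["stop sign", "yield sign", "speed limit sign"]
text_emb = normalize(np.array([
    [1.0, 0.0, 0.0, 0.0],   # "stop sign"
    [0.0, 1.0, 0.0, 0.0],   # "yield sign"
    [0.0, 0.0, 1.0, 0.0],   # "speed limit sign"
]))

def zero_shot_classify(image_emb, text_emb, temperature=0.07):
    """Score an image embedding against class-prompt embeddings.

    Because matching happens in the shared vision-language space,
    weather conditions (or classes) never seen during training can
    still be scored -- the core idea behind the zero-shot strategy.
    """
    logits = normalize(image_emb) @ text_emb.T / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return class_prompts[int(np.argmax(probs))], probs

# Simulated weather-degraded image: a perturbed copy of the
# "stop sign" embedding, mimicking fog-induced feature drift.
degraded = np.array([0.9, 0.1, -0.05, 0.2])
label, probs = zero_shot_classify(degraded, text_emb)
print(label)
```

In a real pipeline the prompt embeddings would come from a contrastively pre-trained text encoder, so descriptions of unseen weather ("a stop sign in heavy snow") can be scored without any retraining.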