In recent years, reinforcement learning has received significant attention and has been widely applied to UAV autonomous navigation. However, most existing studies assume that the UAV operates in a static environment, overlooking randomly appearing dynamic obstacles. Such obstacles are often difficult for conventional sensors to detect in time, posing a serious threat to flight safety. To address autonomous navigation in dynamic environments, this paper introduces the event camera, a novel dynamic vision sensor that captures environmental information with high dynamic range and microsecond-level temporal resolution. To efficiently process the sparse, asynchronous event stream produced by the event camera, we develop a spiking reinforcement learning framework built on a spiking neural network, enabling low-latency and highly efficient control and decision-making. Furthermore, inspired by advances in biological neural dynamics, we propose a biologically plausible plastic spiking-threshold mechanism that allows spiking neurons to dynamically adjust their firing thresholds according to the mean membrane potential and the depolarization rate, enhancing the robustness and adaptability of neural information encoding. Extensive experiments in multiple complex environments in the AirSim simulator demonstrate that the proposed method consistently outperforms baseline methods in dynamic environments across various objective evaluation metrics, achieving higher navigation success rates and flight speeds. Moreover, it remains competitive in previously unseen environments, indicating a degree of generalization capability.
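The adaptive-threshold idea described above can be illustrated with a minimal sketch. The following is a hypothetical leaky integrate-and-fire (LIF) neuron, not the paper's exact formulation: the firing threshold is raised above a baseline in proportion to a running mean of the membrane potential and to the instantaneous depolarization rate, so that sustained or rapid depolarization requires a stronger input to trigger a spike. All class and parameter names (`AdaptiveLIFNeuron`, `alpha`, `beta`, etc.) are illustrative assumptions.

```python
class AdaptiveLIFNeuron:
    """Hypothetical LIF neuron with a plastic firing threshold that
    adapts to the mean membrane potential and the depolarization rate."""

    def __init__(self, tau=20.0, v_rest=0.0, theta0=1.0,
                 alpha=0.5, beta=0.1, dt=1.0):
        self.tau = tau        # membrane time constant
        self.v_rest = v_rest  # resting potential
        self.theta0 = theta0  # baseline firing threshold
        self.alpha = alpha    # weight on the mean membrane potential (assumed)
        self.beta = beta      # weight on the depolarization rate (assumed)
        self.dt = dt          # simulation time step
        self.v = v_rest       # current membrane potential
        self.v_mean = v_rest  # running mean of the membrane potential
        self.v_prev = v_rest  # potential at the previous step

    def step(self, input_current):
        # Leaky integration of the input current.
        dv = (-(self.v - self.v_rest) + input_current) * self.dt / self.tau
        self.v_prev, self.v = self.v, self.v + dv
        # Exponential moving average of the potential and depolarization rate.
        self.v_mean = 0.99 * self.v_mean + 0.01 * self.v
        depol_rate = (self.v - self.v_prev) / self.dt
        # Plastic threshold: rises with sustained depolarization and with
        # rapid increases in potential, stabilizing the firing activity.
        theta = (self.theta0
                 + self.alpha * self.v_mean
                 + self.beta * max(depol_rate, 0.0))
        if self.v >= theta:
            self.v = self.v_rest  # reset after emitting a spike
            return 1
        return 0
```

Under constant input the threshold drifts upward as the mean potential grows, which in this sketch acts as a simple homeostatic brake on the firing rate.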