Lane detection is a fundamental task in autonomous driving: lanes must be detected in real time from streaming video as the vehicle drives. To address the limited understanding of temporal flow in existing video lane detectors, we propose a training framework for streaming-video lane detection and focus on building a series of structures that pass temporal information between frames. Specifically, we propose the Deformable Spatio-Temporal Attention (DSTA) module, which captures instantaneous feature changes and positional shifts between frames and aggregates key information under different spatio-temporal conditions. In addition, to maintain long-term memory at very low computational cost, we design instance caches that suggest candidate lanes for the current frame and, drawing on historical memory, resist short-term lane disappearance. We also experiment with adding background category prediction, which offers a simple way to filter out low-confidence false lane predictions while conveying a more holistic and consistent relationship between lanes and background to the model. These methods give our model a significant lead on the video lane detection dataset VIL-100, reaching an accuracy of 94.9 at 39 FPS.
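To make the idea behind DSTA concrete, the following is a minimal PyTorch sketch of deformable attention applied across frames, in the spirit of Deformable DETR: each lane query predicts a few per-frame sampling offsets around its reference point and fuses the sampled features with learned weights. The module name, tensor shapes, and hyper-parameters (`n_frames`, `n_points`, the 0.05 offset scale) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; shapes and hyper-parameters are assumptions,
# not the paper's actual DSTA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableSpatioTemporalAttention(nn.Module):
    """Each lane query samples a few offset locations in every frame's
    feature map and fuses them with learned attention weights."""

    def __init__(self, dim=256, n_frames=2, n_points=4):
        super().__init__()
        self.n_frames, self.n_points = n_frames, n_points
        self.offset_head = nn.Linear(dim, n_frames * n_points * 2)  # (dx, dy)
        self.weight_head = nn.Linear(dim, n_frames * n_points)
        self.value_proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, query, ref_points, frame_feats):
        # query:       (B, Q, C)       lane queries for the current frame
        # ref_points:  (B, Q, 2)       normalized (x, y) in [0, 1]
        # frame_feats: (B, T, C, H, W) current + past frame features
        B, Q, C = query.shape
        T, P = self.n_frames, self.n_points

        # Offsets let each query follow how a lane shifts between frames.
        offsets = 0.05 * self.offset_head(query).view(B, Q, T, P, 2).tanh()
        locs = ref_points[:, :, None, None, :] + offsets        # (B,Q,T,P,2)
        weights = self.weight_head(query).view(B, Q, T * P).softmax(-1)
        weights = weights.view(B, Q, T, P)

        out = query.new_zeros(B, Q, C)
        for t in range(T):
            value = self.value_proj(frame_feats[:, t])          # (B, C, H, W)
            grid = locs[:, :, t] * 2.0 - 1.0    # grid_sample wants [-1, 1]
            sampled = F.grid_sample(value, grid, align_corners=False)
            # sampled: (B, C, Q, P) -> weighted sum over the P points
            out = out + (sampled * weights[:, None, :, t]).sum(-1).transpose(1, 2)
        return self.out_proj(out)


dsta = DeformableSpatioTemporalAttention(dim=256, n_frames=2, n_points=4)
q = torch.randn(1, 10, 256)              # 10 lane queries
refs = torch.rand(1, 10, 2)              # reference points in [0, 1]
feats = torch.randn(1, 2, 256, 40, 100)  # current + 1 past frame
fused = dsta(q, refs, feats)             # (1, 10, 256)
```

Because each query samples only T × P locations per frame pair, the cost grows linearly with the number of referenced frames rather than with the full spatio-temporal feature volume, which is what makes attention of this kind attractive for streaming video.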
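In the same hedged spirit, the sketch below shows one plausible form an instance cache could take: one embedding per lane, matched to the current frame's detections by cosine similarity, smoothed with an exponential moving average, and kept alive for a few frames when unmatched so a briefly occluded lane can still be proposed. The matching rule, EMA momentum, and age-based eviction are assumptions for illustration, not the paper's exact design.

```python
# Illustrative sketch only; matching and eviction rules are assumptions.
import torch
import torch.nn.functional as F


class LaneInstanceCache:
    """One cached embedding and an 'age' counter per tracked lane."""

    def __init__(self, momentum=0.9, max_age=5, match_thresh=0.5):
        self.momentum = momentum          # EMA weight on cached embeddings
        self.max_age = max_age            # frames a lane may stay undetected
        self.match_thresh = match_thresh  # min cosine similarity to match
        self.embeddings = []              # one (C,) tensor per cached lane
        self.ages = []                    # frames since each lane was seen

    def update(self, detections):
        """detections: iterable of (C,) lane embeddings from this frame.
        Returns all live cached embeddings, usable as lane proposals for
        the next frame even if a lane is briefly occluded."""
        matched = set()
        for det in detections:
            best, best_sim = None, self.match_thresh
            for i, emb in enumerate(self.embeddings):
                if i in matched:
                    continue
                sim = F.cosine_similarity(det, emb, dim=0).item()
                if sim > best_sim:
                    best, best_sim = i, sim
            if best is None:
                # Unseen lane: open a fresh cache entry.
                self.embeddings.append(det.detach())
                self.ages.append(0)
                matched.add(len(self.embeddings) - 1)
            else:
                # Known lane: smooth its embedding and reset its age.
                m = self.momentum
                self.embeddings[best] = m * self.embeddings[best] + (1 - m) * det.detach()
                self.ages[best] = 0
                matched.add(best)
        # Age the unmatched lanes; evict only those unseen for too long,
        # so a lane that vanishes for a few frames is still proposed.
        for i in reversed(range(len(self.embeddings))):
            if i not in matched:
                self.ages[i] += 1
                if self.ages[i] > self.max_age:
                    del self.embeddings[i]
                    del self.ages[i]
        return list(self.embeddings)
```

In a design like this, long-term memory costs only one embedding and an age counter per lane, rather than full feature maps from past frames, which is consistent with the very low computational overhead the abstract claims for the instance caches.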