Abstract
Visual-Inertial Simultaneous Localization and Mapping (VI-SLAM) improves mapping and localization accuracy by fusing visual and inertial constraints, but it struggles in low-light environments, where feature extraction degrades and tracking failures occur. To address this issue, we build on VINS-Mono and propose a monocular VI-SLAM algorithm, GL-VINS. First, we introduce an adaptive dark enhancement and brightness-guided fusion method that improves low-light image quality while avoiding local overexposure. Second, we propose the Fast-GL corner detector, which achieves precise feature extraction through adaptive thresholding and a joint filtering strategy. Finally, Inertial Measurement Unit (IMU) pre-integration is used to guide KLT optical-flow tracking, combined with non-blind image deblurring to handle motion blur, thereby improving tracking stability. Experiments demonstrate that the algorithm achieves maximum improvements of 46.41%, 19.11%, and 4.02% in the SD, AG, and IE metrics, respectively, on the LOL dataset, and improvements of over 30% in both mean error and root-mean-square error on the DarkVision-EuRoC dataset, which was constructed by linearly reducing the brightness of the EuRoC dataset. Physical experiments further validate the superiority of the proposed algorithm.