This study presents LEAD-YOLO, a YOLOv5 variant optimized for ship detection in synthetic aperture radar (SAR) imagery and tailored for edge computing devices. SAR imagery is crucial for maritime surveillance owing to its all-weather, day-and-night imaging capability and its robustness to adverse weather conditions. However, ship detection in SAR imagery faces dual challenges: accuracy and real-time processing. Numerous factors impede accurate detection, such as ocean wave noise, variations in ship size and orientation, and radar reflections from near-shore landmasses. Furthermore, the need for rapid decision-making in maritime emergencies demands efficient, real-time vessel detection. LEAD-YOLO addresses these challenges by integrating FasterNet to reduce model complexity, adopting the Receptive-Field Attention Convolutional Operation (RFCBAMConv) to improve feature representation, and incorporating Coordinate Attention into the C3 block (C3_CA) to enhance spatial feature encoding. The resulting method strikes a balance between detection accuracy and computational cost. Experiments on the SSDD, HRSID, and SAR-Ship datasets show that LEAD-YOLO reduces the parameter count by 55.35% and increases the detection frame rate by 57.6% compared with YOLOv5s. We also compared our method with other leading SAR vessel detection methods. The results demonstrate that, despite its reduced complexity, our method outperforms most of them in average precision (AP), highlighting its effectiveness and practicality.