Fabric defect detection is a crucial step in ensuring product quality in the textile industry. However, existing detection methods face challenges in processing efficiency for high-resolution images, detail recovery during upsampling, and the adaptability of loss functions to low-quality samples, all of which limit detection accuracy and real-time performance. To overcome these limitations, this paper proposes an improved YOLOv8-based model that addresses all three issues for fabric defect detection. First, an efficient RG-C2f module is introduced to improve processing speed on high-resolution images. Second, the DySample upsampling operator is adopted to better preserve edges and fine details, improving detail recovery within defect regions. Finally, an adaptive inner-WIoU loss function is designed to dynamically adjust the focus placed on low-quality samples, thereby strengthening the model's generalization capability. Experiments on the TILDA and Tianchi datasets show that, compared with YOLOv8, the proposed model improves mAP by 6.4% and 1.5%, respectively, with notable gains in both detection accuracy and speed. These results provide strong support for practical fabric defect detection.
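As an illustrative sketch only (the paper's exact formulation is given in its methods section, not in this abstract), an inner-WIoU-style loss can be assembled from the published Inner-IoU and WIoU ingredients: the IoU term is computed on auxiliary boxes shrunk or enlarged by a scale ratio about the box centers, and is then reweighted by WIoU's distance-based and outlier-based focusing factors. The scale ratio and the hyperparameters $\alpha$, $\delta$ below are assumptions, not values taken from this paper.

$$\mathcal{L}_{\mathrm{Inner\text{-}IoU}} = 1 - \mathrm{IoU}\!\left(b^{\mathrm{ratio}},\, b_{gt}^{\mathrm{ratio}}\right)$$

$$R_{\mathrm{WIoU}} = \exp\!\left( \frac{(x - x_{gt})^2 + (y - y_{gt})^2}{\left(W_g^2 + H_g^2\right)^{*}} \right), \qquad \beta = \frac{\mathcal{L}_{\mathrm{IoU}}^{*}}{\overline{\mathcal{L}}_{\mathrm{IoU}}}, \qquad r = \frac{\beta}{\delta\, \alpha^{\beta - \delta}}$$

$$\mathcal{L}_{\mathrm{Inner\text{-}WIoU}} = r\, R_{\mathrm{WIoU}}\, \mathcal{L}_{\mathrm{Inner\text{-}IoU}}$$

Here $b^{\mathrm{ratio}}$ and $b_{gt}^{\mathrm{ratio}}$ denote the predicted and ground-truth boxes rescaled by the ratio factor, $W_g$ and $H_g$ are the width and height of the smallest enclosing box, the superscript $*$ marks terms detached from the gradient, and $\overline{\mathcal{L}}_{\mathrm{IoU}}$ is a running mean of the IoU loss; the outlier degree $\beta$ is what lets the factor $r$ down-weight low-quality samples dynamically.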