This study presents a comprehensive evaluation of recent YOLO architectures (YOLOv8s, YOLOv9s, YOLOv10s, YOLO11s, and YOLO12s) for detecting red, yellow, and purple raspberry fruits under field conditions. Images were collected with a smartphone camera under varying illumination, weather, and occlusion conditions. Each model was trained and evaluated using standard object detection metrics (precision, recall, mAP50, mAP50:95, and F1-score), and inference performance was benchmarked on both a high-performance platform (NVIDIA RTX 5080) and an embedded platform (NVIDIA Jetson Orin NX). All models achieved high, consistent detection accuracy across fruits of all three colors, confirming the robustness of the YOLO design. Compact variants offered the best trade-off between accuracy and computational cost, whereas deeper architectures yielded only marginal accuracy gains at higher latency. TensorRT optimization on the Jetson device further improved real-time inference, making it particularly well suited to embedded deployment. These results indicate that modern YOLO architectures have reached architectural maturity, with advances now driven by optimization and specialization rather than structural redesign. The findings underline the strong potential of YOLO-based detectors as core components of intelligent, edge-deployable systems for precision agriculture and automated fruit detection.