ABSTRACT Real‐time object recognition is a significant field of research with numerous applications, including object tracking, video surveillance, and autonomous driving. These methods identify the smallest bounding boxes that encompass the objects of interest within the input images. Nevertheless, existing approaches face challenges, such as limited support for quantization and suboptimal trade‐offs in achieving accurate object detection. To address these issues, a novel approach called Faster region‐based Convoluted Non‐monopolize search You Only Look Once neural architecture Search (FCN‐YOLOS) is introduced for object detection. This approach merges the advanced feature extraction abilities of Faster R‐CNN with the efficient object recognition strengths of YOLOv8, enhanced by Neural Architecture Search (NAS) optimization. YOLOv8 is employed for its rapid and accurate real‐time detection of abandoned items, while Faster R‐CNN contributes sophisticated feature extraction by utilizing statistical, grid, and Histogram of Oriented Optical Flow (HOOF) features to improve object representation and classification. Additionally, NAS optimizes hyperparameters by balancing exploration and exploitation, which helps minimize the loss function, reduce overfitting, and enhance generalization. This yields exceptional real‐time object detection performance within the FCN‐YOLOS framework. Compared to existing methods, the proposed technique achieved maximum values of approximately 99%, 96.3%, 94.9%, and 95.2% for accuracy, recall, precision, and F1 score, respectively, under brightness variation. These outcomes highlight its broad applicability across diverse object detection contexts, making it a compelling option for both academic and industrial research. Overall, the proposed approach's refinements to feature extraction and hyperparameter tuning further improve both efficiency and object detection accuracy.