Computer science
Domain adaptation
Domain (mathematical analysis)
Artificial intelligence
Object detection
Computer vision
Regularization (linguistics)
Object (grammar)
Domain engineering
Cognitive neuroscience of visual object recognition
Pattern recognition (psychology)
Classifier (UML)
Mathematics
Software
Software system
Mathematical analysis
Component-based software engineering
Programming language
Authors
Guofa Li, Zefeng Ji, Xingda Qu, Rui Zhou, Dongpu Cao
Identifier
DOI: 10.1109/tiv.2022.3165353
Abstract
Supervised object detection models based on deep learning cannot perform well under domain shift, where annotated data for training are always insufficient. To this end, domain adaptation technologies for knowledge transfer have emerged to handle domain shift problems. A stepwise domain adaptive YOLO (S-DAYOLO) framework is developed, which constructs an auxiliary domain to bridge the domain gap and uses a new domain adaptive YOLO (DAYOLO) for cross-domain object detection tasks. Different from previous solutions, the auxiliary domain is composed of original source images and synthetic images translated from the source images to resemble those in the target domain. DAYOLO, built on YOLOv5s, is designed with a category-consistent regularization module and adaptation modules for image-level and instance-level features to generate domain-invariant representations. Our proposed method is trained and evaluated on five public driving datasets: Cityscapes, Foggy Cityscapes, BDD100K, KITTI, and KAIST. Experimental results demonstrate that object detection performance is significantly improved when using our proposed method in various domain shift scenarios for autonomous driving applications.
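The abstract mentions adaptation modules for image-level and instance-level features that encourage domain-invariant representations. A common way to realize such modules in domain-adaptive detection is adversarial training with auxiliary domain classifiers behind a gradient reversal layer. The sketch below illustrates that general idea only; it is not the authors' S-DAYOLO code, and the class names, feature shapes, loss weighting, and the omission of the category-consistent regularization and YOLOv5s internals are all assumptions made for illustration.

```python
# Minimal, hypothetical sketch of gradient-reversal-based image-/instance-level
# domain adaptation; NOT the authors' exact S-DAYOLO implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)


class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target from a backbone feature map (per-location logits)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
        )

    def forward(self, feat, lamb=1.0):
        return self.net(grad_reverse(feat, lamb))  # (B, 1, H, W)


class InstanceLevelDomainClassifier(nn.Module):
    """Predicts source vs. target from pooled per-instance feature vectors."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, inst_feats, lamb=1.0):
        return self.net(grad_reverse(inst_feats, lamb))  # (N, 1)


def domain_adaptation_loss(img_logits, inst_logits, is_source):
    """Binary cross-entropy against the domain label (1 = source, 0 = target)."""
    label = float(is_source)
    img_loss = F.binary_cross_entropy_with_logits(
        img_logits, torch.full_like(img_logits, label))
    inst_loss = F.binary_cross_entropy_with_logits(
        inst_logits, torch.full_like(inst_logits, label))
    return img_loss + inst_loss


if __name__ == "__main__":
    # Toy shapes standing in for YOLOv5s-style backbone features and pooled instances.
    feat = torch.randn(2, 128, 20, 20)   # image-level feature map
    inst = torch.randn(8, 128)           # instance-level feature vectors
    img_head = ImageLevelDomainClassifier(128)
    inst_head = InstanceLevelDomainClassifier(128)
    loss = domain_adaptation_loss(img_head(feat), inst_head(inst), is_source=True)
    loss.backward()  # reversed gradients would push detector features toward domain invariance
    print(loss.item())
```

In this kind of setup, the detector is trained to confuse the domain classifiers (via the reversed gradients) while the classifiers try to separate source from target, which is one standard route to the domain-invariant image-level and instance-level features the abstract describes.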