Fast-COS: A Fast One-Stage Object Detector Based on Reparameterized Attention Vision Transformer for Autonomous Driving

Authors
Novendra Setyawan,Ghufron Wahyu Kurniawan,Chi‐Chia Sun,Wen‐Kai Kuo,Jun-Wei Hsieh
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2502.07417
Abstract

The perception system plays a critical role in an autonomous driving system, ensuring safety. Driving scene perception is fundamentally an object detection task that requires balancing accuracy against processing speed. Many contemporary methods focus on improving detection accuracy but overlook the importance of real-time detection when computational resources are limited. It is therefore vital to investigate efficient object detection strategies for driving scenes. This paper introduces Fast-COS, a novel single-stage object detection framework crafted specifically for driving-scene applications. The research begins with an analysis of the backbone, considering both macro and micro architectural designs, yielding the Reparameterized Attention Vision Transformer (RAViT). RAViT utilizes Reparameterized Multi-Scale Depth-Wise Convolution (RepMSDW) and Reparameterized Self-Attention (RepSA) to enhance computational efficiency and feature extraction. In extensive tests across GPU, edge, and mobile platforms, RAViT achieves 81.4% Top-1 accuracy on the ImageNet-1K dataset, demonstrating significant throughput improvements over comparable backbone models such as ResNet, FastViT, RepViT, and EfficientFormer. Additionally, integrating RepMSDW into a feature pyramid network forms RepFPN, enabling fast, multi-scale feature fusion. Fast-COS enhances object detection in driving scenes, attaining an AP50 score of 57.2% on the BDD100K dataset and 80.0% on the TJU-DHD Traffic dataset. It surpasses leading models in efficiency, delivering up to 75.9% faster GPU inference and 1.38× higher throughput on edge devices compared with FCOS, YOLOF, and RetinaNet. These findings establish Fast-COS as a highly scalable and reliable solution for real-time applications, especially in resource-limited environments such as autonomous driving systems.
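The "reparameterized" components named in the abstract (RepMSDW, RepSA) rely on structural reparameterization: at training time the network uses several parallel convolution branches of different kernel sizes, and at inference time those branches are algebraically merged into a single kernel, so the multi-branch accuracy is kept while the runtime cost drops to one convolution. The sketch below illustrates only this merging identity for a single-channel depth-wise case with hypothetical 3×3 and 5×5 branches; it is not the paper's implementation, and the kernel sizes and single-channel setup are illustrative assumptions.

```python
import numpy as np

def depthwise_conv2d(x, k):
    """Single-channel 2D cross-correlation with zero padding ('same' output)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def pad_kernel(k, size):
    """Zero-pad a smaller kernel to size x size, keeping it centered."""
    p = (size - k.shape[0]) // 2
    return np.pad(k, ((p, p), (p, p)))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k3 = rng.standard_normal((3, 3))   # small-scale branch
k5 = rng.standard_normal((5, 5))   # large-scale branch

# Training-time (multi-branch) form: sum of parallel depth-wise convolutions.
y_branches = depthwise_conv2d(x, k3) + depthwise_conv2d(x, k5)

# Inference-time (reparameterized) form: one merged 5x5 kernel.
k_merged = pad_kernel(k3, 5) + k5
y_merged = depthwise_conv2d(x, k_merged)

# The two forms are numerically equivalent because convolution is linear.
assert np.allclose(y_branches, y_merged)
```

Because convolution is linear in the kernel, centering and summing the branch kernels yields a single kernel with identical output, which is what allows the multi-scale branches to be folded away before deployment.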
