Boosting Monocular 3D Object Detection With Object-Centric Auxiliary Depth Supervision

Keywords: artificial intelligence, depth map, computer science, object detection, monocular, leverage (statistics), computer vision, LiDAR, detector, depth perception, pattern recognition (psychology), image (mathematics), remote sensing, geography, telecommunications, biology, neuroscience, perception
Authors
Young-Seok Kim,Sanmin Kim,Sangmin Sim,Jun Won Choi,Dongsuk Kum
Source
Journal: IEEE Transactions on Intelligent Transportation Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: pp. 1-13 | Cited by: 11
Identifier
DOI: 10.1109/tits.2022.3224082
Abstract

Recent advances in monocular 3D detection leverage a depth estimation network explicitly as an intermediate stage of the 3D detection network. Depth map approaches yield more accurate depth to objects than other methods, thanks to the depth estimation network being trained on a large-scale dataset. However, depth map approaches are limited by the accuracy of the depth map, and sequentially running two separate networks for depth estimation and 3D detection significantly increases computation cost and inference time. In this work, we propose a method to boost an RGB image-based 3D detector by jointly training the detection network with a depth prediction loss analogous to the depth estimation task. In this way, the 3D detection network receives additional depth supervision from raw LiDAR points, which incurs no human annotation cost, and learns to estimate accurate depth without explicitly predicting a depth map. Our novel object-centric depth prediction loss focuses on the depth around foreground objects, which is most important for 3D object detection, thereby leveraging pixel-wise depth supervision in an object-centric manner. The depth regression model is further trained to predict the uncertainty of depth, which represents the 3D confidence of objects. To effectively train the 3D detector with raw LiDAR points and to enable end-to-end training, we revisit the regression targets of 3D objects and design a suitable network architecture. Extensive experiments on the KITTI and nuScenes benchmarks show that our method significantly boosts a monocular image-based 3D detector to outperform depth map approaches while maintaining real-time inference speed.
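The abstract only sketches how the auxiliary supervision is formed. Below is a minimal, hypothetical PyTorch sketch of an object-centric, uncertainty-aware depth loss consistent with that description: pixel-wise depth targets rendered from raw LiDAR points, extra weight on pixels inside projected object boxes, and a Laplacian negative log-likelihood whose scale term acts as the depth uncertainty. The function name, the Laplacian form, the foreground-mask construction, and the fg_weight hyperparameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def object_centric_depth_loss(pred_depth, pred_log_b, lidar_depth,
                              valid_mask, fg_mask, fg_weight=5.0):
    """Illustrative object-centric depth loss with aleatoric uncertainty.

    pred_depth : (B, H, W) depth predicted by an auxiliary head.
    pred_log_b : (B, H, W) log scale of a Laplacian over depth (uncertainty).
    lidar_depth: (B, H, W) sparse depth rendered from raw LiDAR points.
    valid_mask : (B, H, W) bool, True where a LiDAR return exists.
    fg_mask    : (B, H, W) bool, True inside projected 2D object boxes.
    fg_weight  : extra weight on foreground pixels (assumed hyperparameter).
    """
    # Laplacian negative log-likelihood: |d - d*| / b + log b, with b = exp(pred_log_b)
    nll = torch.abs(pred_depth - lidar_depth) * torch.exp(-pred_log_b) + pred_log_b

    # Emphasize pixels around foreground objects (object-centric weighting),
    # and supervise only where a LiDAR return is available.
    weight = torch.where(fg_mask,
                         torch.full_like(nll, fg_weight),
                         torch.ones_like(nll))
    weight = weight * valid_mask.float()

    return (nll * weight).sum() / weight.sum().clamp(min=1.0)
```

At inference, a per-object depth uncertainty (for example, the mean predicted scale inside the object's box) could be mapped to a 3D confidence score; the abstract states that the predicted uncertainty represents the 3D confidence of objects but does not specify the mapping.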