Self-supervised multi-frame depth estimation with visual-inertial pose transformer and monocular guidance

Authors
Xiang Wang, Haonan Luo, Zihang Wang, Jin Zheng, Xiao Bai
Source
Journal: Information Fusion [Elsevier BV]
Volume: 108, Article 102363 · Cited by: 9
Identifier
DOI:10.1016/j.inffus.2024.102363
Abstract

Self-supervised monocular depth estimation has been a popular topic since it does not require labor-intensive collection of depth ground truth. However, the accuracy of a monocular network is limited, as it can only exploit the context of a single image and ignores the geometric cues residing in videos. Most recently, multi-frame depth networks have been introduced into the self-supervised depth learning framework to improve on monocular depth; they explicitly encode geometric information via pairwise cost volume construction. In this paper, we address two main issues that affect cost volume construction and thus multi-frame depth estimation. First, camera pose estimation, which determines the epipolar geometry in cost volume construction but has rarely been addressed, is enhanced with an additional inertial modality. The complementary visual and inertial modalities are fused adaptively to provide accurate camera poses with a novel visual-inertial fusion Transformer, in which self-attention handles visual-inertial feature interaction and cross-attention is used for task feature decoding and pose regression. Second, the monocular depth prior, which contains contextual information about the scene, is introduced into multi-frame cost volume aggregation at the feature level. A novel monocular guided cost volume excitation module is proposed to adaptively modulate cost volume features and resolve possible matching ambiguity. With the proposed modules, we present a self-supervised multi-frame depth estimation network consisting of a monocular depth branch serving as a prior, a camera pose branch integrating both visual and inertial modalities, and a multi-frame depth branch producing the final depth with the aid of the former two branches. Experimental results on the KITTI dataset show that our proposed method achieves a notable performance boost in multi-frame depth estimation over state-of-the-art competitors.
Compared with ManyDepth and MOVEDepth, our method improves depth accuracy by 9.2% and 5.3%, respectively, on the KITTI dataset.
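The abstract's "monocular guided cost volume excitation" idea — using features from the monocular depth prior to gate the multi-frame cost volume and suppress ambiguous matches — can be illustrated with a minimal sketch. The shapes, the `excite_cost_volume` function, the learned projection `W`, and the sigmoid gating below are all illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def excite_cost_volume(cost_volume, mono_feat, W):
    """Modulate a matching cost volume with a gate derived from monocular features.

    cost_volume : (D, H, W_img) matching costs over D depth hypotheses
    mono_feat   : (C, H, W_img) feature map from the monocular depth branch
    W           : (D, C) projection mapping features to per-hypothesis gates
                  (learned in a real network; random here for the sketch)
    """
    # One gate in (0, 1) per depth hypothesis per pixel, driven by the prior.
    gate = sigmoid(np.einsum('dc,chw->dhw', W, mono_feat))  # (D, H, W_img)
    # Element-wise excitation: confident prior regions keep their costs,
    # ambiguous ones are attenuated.
    return cost_volume * gate

rng = np.random.default_rng(0)
D, C, H, Wi = 4, 8, 5, 6
cv = rng.standard_normal((D, H, Wi))
feat = rng.standard_normal((C, H, Wi))
W = rng.standard_normal((D, C))
out = excite_cost_volume(cv, feat, W)
print(out.shape)  # (4, 5, 6)
```

Because the gate lies strictly in (0, 1), the excitation can only attenuate cost entries, never amplify them — a squeeze-and-excitation-style design choice assumed here for illustration.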