Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras

Keywords: Computer Science · Motion Capture · Computer Vision · Artificial Intelligence · Convolutional Neural Network · Virtual Reality · Face (sociological concept) · Pose · Augmented Reality · Parametric Statistics · Facial Expression · 3D Reconstruction · Avatar · Mobile Device · Computer Graphics (Images) · Human-Computer Interaction · Motion (Physics) · Operating System · Social Science · Statistics · Mathematics · Sociology
Authors
Young-Woon Cha,True Price,Zhen Wei,Xinran Lu,Nicholas Rewkowski,Rohan Chabra,Zihe Qin,Hyounghun Kim,Zhaoqi Su,Yebin Liu,Adrian Ilie,Andrei State,Zhenlin Xu,Jan‐Michael Frahm,Henry Fuchs
Source
Journal: IEEE Transactions on Visualization and Computer Graphics [Institute of Electrical and Electronics Engineers]
Volume/Issue: 24 (11): 2993-3004 · Cited by: 54
Identifier
DOI: 10.1109/tvcg.2018.2868527
Abstract

We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed ego-centric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g. cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of the ego-centric reconstruction, however, is the poor coverage of the near-body views - that is, the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome these challenges, we propose a parametric-model-based approach to user motion estimation. This approach utilizes convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time-point during capture, the intermediate model-based reconstructions from these systems are used to re-target a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system is capable of reconstructing the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be immersively experienced within a virtual reality system (e.g., the HTC Vive). We expect that the size of the proposed egocentric capture-and-reconstruction system will eventually be reduced to fit within future AR glasses, and will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.
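The abstract describes a per-frame pipeline: a CNN estimates body pose from near-body views, a second CNN fuses audio and video to estimate facial expression, and both intermediate estimates are then retargeted onto a high-fidelity pre-scanned model of the user. A minimal sketch of that control flow, with all function names (`estimate_body_pose`, `estimate_expression`, `retarget`) and the trivial stand-in "networks" being illustrative assumptions rather than the authors' actual implementation:

```python
# Hypothetical sketch of the per-frame egocentric capture loop described in
# the abstract. The CNN stages are stood in by trivial stubs; only the data
# flow (pose + audio-visual expression -> retargeted avatar) follows the text.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FrameInput:
    body_views: List[str]       # near-body camera frames (placeholders)
    face_view: str              # near-face camera crop (placeholder)
    audio_window: List[float]   # short audio snippet around this time-point


def estimate_body_pose(body_views: List[str]) -> Dict:
    """Stand-in for the near-view body-pose CNN: returns joint parameters."""
    return {"joints": [0.0] * 17}


def estimate_expression(face_view: str, audio_window: List[float]) -> Dict:
    """Stand-in for the audio-visual expression CNN: fuses both modalities."""
    visual = 0.5  # pretend video-branch score
    acoustic = sum(audio_window) / max(len(audio_window), 1)
    return {"blendshapes": [0.5 * (visual + acoustic)] * 10}


def retarget(pose: Dict, expression: Dict, prescanned_model: str) -> Dict:
    """Apply the intermediate model-based estimates to the pre-scanned model."""
    return {
        "model": prescanned_model,
        "joints": pose["joints"],
        "blendshapes": expression["blendshapes"],
    }


def process_frame(frame: FrameInput, prescanned_model: str = "user_scan.obj") -> Dict:
    """One time-point of capture: estimate pose and expression, then retarget."""
    pose = estimate_body_pose(frame.body_views)
    expr = estimate_expression(frame.face_view, frame.audio_window)
    return retarget(pose, expr, prescanned_model)


frame = FrameInput(body_views=["cam_L", "cam_R"], face_view="face_crop",
                   audio_window=[0.1, 0.3, 0.2])
avatar = process_frame(frame)
print(len(avatar["joints"]), len(avatar["blendshapes"]))  # prints "17 10"
```

In the actual system each stub would be a trained network and the retargeting step would drive the scanned mesh; the sketch only makes explicit that pose and expression are estimated independently per time-point and merged at the retargeting stage.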
