Codec
Computer science
Gaussian distribution
Computer graphics (images)
Computer hardware
Physics
Quantum mechanics
Authors
J. Z. Li, Chen Cao, Gabriel Schwartz, Rawal Khirodkar, Christian Richardt, Tomas Simon, Yaser Sheikh, Shunsuke Saito
Identifier
DOI:10.1145/3680528.3687653
Abstract
We present a new approach to creating photorealistic and relightable head avatars from a phone scan with unknown illumination. The reconstructed avatars can be animated and relit in real time with the global illumination of diverse environments. Unlike existing approaches that estimate parametric reflectance parameters via inverse rendering, our approach directly models learnable radiance transfer that incorporates global light transport in an efficient manner for real-time rendering. However, learning such a complex light transport that can generalize across identities is non-trivial. A phone scan in a single environment lacks sufficient information to infer how the head would appear in general environments. To address this, we build a universal relightable avatar model represented by 3D Gaussians. We train on hundreds of high-quality multi-view human scans with controllable point lights. High-resolution geometric guidance further enhances the reconstruction accuracy and generalization. Once trained, we finetune the pretrained model on a phone scan using inverse rendering to obtain a personalized relightable avatar. Our experiments establish the efficacy of our design, outperforming existing approaches while retaining real-time rendering capability.
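The core efficiency argument in the abstract is that relighting with a learned radiance transfer reduces to a small linear operation at render time, in the spirit of classical precomputed radiance transfer (PRT). The sketch below illustrates that idea under assumptions not taken from the paper: each 3D Gaussian stores per-channel transfer coefficients in a spherical-harmonics (SH) basis, and relighting under a new environment is a per-channel inner product with the environment's SH lighting coefficients. All names, shapes, and the random data are illustrative.

```python
# Minimal PRT-style relighting sketch (illustrative, not the paper's API).
# Each Gaussian holds learned transfer coefficients that bake in global
# light transport; relighting is then a single tensor contraction.
import numpy as np

N_GAUSSIANS = 4      # toy number of 3D Gaussian primitives
SH_COEFFS = 9        # SH basis up to order l = 2 -> 9 coefficients

rng = np.random.default_rng(0)

# Learned per-Gaussian, per-RGB-channel transfer coefficients. In the
# paper these would come from the universal avatar model after phone-scan
# finetuning; here they are random placeholders.
transfer = rng.uniform(0.0, 1.0, size=(N_GAUSSIANS, 3, SH_COEFFS))

# Target environment illumination projected onto the same SH basis (RGB).
env_light = rng.uniform(0.0, 1.0, size=(3, SH_COEFFS))

# Relighting: radiance[n, c] = <transfer[n, c, :], env_light[c, :]>.
# Because global transport is folded into `transfer`, this contraction is
# all that runs per frame, which is what makes real-time rendering viable.
radiance = np.einsum('ncs,cs->nc', transfer, env_light)

print(radiance.shape)  # one RGB radiance value per Gaussian: (4, 3)
```

Swapping `env_light` for a different environment map's SH projection relights the same avatar without re-optimizing anything, which matches the abstract's claim of real-time relighting under diverse environments.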