Computer science
Radiance
Rendering (computer graphics)
Computer vision
Artificial intelligence
Computer graphics (images)
Exploitation
Remote sensing
Computer security
Geology
Authors
Weipeng Jing, S. Wang, Wenjun Zhang, Chao Li
Identifier
DOI: 10.1109/TCE.2023.3346870
Abstract
With the rapid development of the metaverse, AR/VR technology and consumer electronics have emerged as crucial drivers of future virtual experiences. In this context, realistic 3D scene modeling and rendering have become particularly critical. Neural Radiance Fields (NeRF), a deep learning-based method, have made significant progress in the reconstruction and rendering of real-world 3D scenes. However, NeRF still struggles to handle high-frequency information within objects efficiently, leading to issues such as blurry details and artifacts. To address this issue and provide more immersive virtual experiences, we propose Vivid-NeRF, which benefits both metaverse and AR/VR technologies. Vivid-NeRF extracts information at different frequencies from the image and fully exploits the high-frequency information during 3D feature generation to obtain more realistic details. In addition, we propose frequency-based sampling to increase the sampling of high-frequency components. Finally, we merge the frequency information with viewpoint features obtained through frequency-based sampling to strengthen the model's capability to express scene details. With these improvements, Vivid-NeRF significantly reduces surface blur and accurately captures and reproduces smooth surface appearances. We conduct experiments on the Blender and Shiny-Blender datasets. Vivid-NeRF achieves a PSNR of 35.00 and 34.01, an SSIM of 0.976 and 0.970, and an LPIPS of 0.033 and 0.064 on the two datasets, respectively. Both quantitative and qualitative assessments show that our approach outperforms the previous state-of-the-art (SOTA) method, ABLE-NeRF.
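The abstract does not specify how the frequency decomposition or the frequency-based sampling is implemented, so the following Python sketch only illustrates the general idea: split an image into low- and high-frequency bands with a Gaussian low-pass filter, then allocate more ray samples to regions with strong high-frequency energy. Every name and parameter here (split_frequencies, frequency_based_sample_counts, sigma, base_samples, extra_samples) is a hypothetical stand-in for illustration, not the paper's actual method or API.

import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(image, sigma=2.0):
    """Split an image into low- and high-frequency bands (illustrative only).

    A Gaussian blur serves as the low-pass filter; the residual carries
    edges and fine texture, i.e. the high-frequency band.
    """
    low = gaussian_filter(image, sigma=sigma)
    high = image - low
    return low, high

def frequency_based_sample_counts(high_band, rays_uv, base_samples=64, extra_samples=64):
    """Give rays that pass through high-frequency regions a larger sample budget.

    `rays_uv` holds integer pixel coordinates (N, 2) of each ray's footprint.
    Rays through detailed regions receive up to `extra_samples` extra points.
    """
    energy = np.abs(high_band)
    if energy.max() > 0:
        energy = energy / energy.max()  # normalize to [0, 1]
    weights = energy[rays_uv[:, 0], rays_uv[:, 1]]
    return base_samples + np.round(weights * extra_samples).astype(int)

# Toy usage: a synthetic image with one sharp vertical edge.
img = np.zeros((64, 64), dtype=np.float32)
img[:, 32:] = 1.0
low, high = split_frequencies(img)
uv = np.array([[10, 10], [10, 31]])  # a flat-region ray vs. a near-edge ray
print(frequency_based_sample_counts(high, uv))  # the edge ray gets more samples

In a full NeRF pipeline, the per-ray budget returned here would drive the stratified sampling of points along each ray; the Gaussian residual is just one convenient stand-in for whatever band split (learned or Fourier-domain) the paper actually uses.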