Computer Science
Computer Vision
Diffusion
Computer Graphics (Images)
Artificial Intelligence
Visualization
Estimation
Physics
Management
Economics
Thermodynamics
Authors
Sam Z. Shen,Zhongyun Bao,Wenju Xu,Chunxia Xiao
Identifier
DOI:10.1109/tvcg.2025.3553853
Abstract
Illumination estimation from a single indoor image is a promising yet challenging task. Existing indoor illumination estimation methods mainly regress lighting parameters or infer a panorama from a limited field-of-view image. However, these methods fail to recover a panorama with both well-distributed illumination and detailed environment textures, leading to a lack of realism when rendering embedded 3D objects with complex materials. This paper presents a novel multi-stage illumination estimation framework named IllumiDiff. Specifically, in Stage I, we estimate illumination conditions from the input image, including the illumination distribution and the environmental texture of the scene. In Stage II, guided by the estimated illumination conditions, we design a conditional panoramic texture diffusion model to generate a high-quality LDR panorama. In Stage III, we leverage the illumination conditions to further reconstruct the LDR panorama into an HDR panorama. Extensive experiments demonstrate that IllumiDiff can generate an HDR panorama with a realistic illumination distribution and rich texture details from a single limited field-of-view indoor image. The generated panorama produces convincing rendering results for embedded 3D objects with various materials.
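The abstract describes a three-stage data flow: Stage I estimates illumination conditions, Stage II runs a conditional panoramic texture diffusion model to produce an LDR panorama, and Stage III lifts that panorama to HDR. The sketch below restates only this flow in PyTorch-style Python; the class IllumiDiffPipeline and its submodules (estimator, diffusion, hdr_net) are hypothetical placeholders inferred from the abstract, not the authors' released code or architecture details.

# Minimal sketch of the Stage I-III pipeline described in the abstract.
# All module names here are assumptions; only the staging mirrors the text.
import torch
import torch.nn as nn


class IllumiDiffPipeline(nn.Module):
    """Hypothetical wrapper that chains the three stages named in the abstract."""

    def __init__(self, estimator: nn.Module, diffusion: nn.Module, hdr_net: nn.Module):
        super().__init__()
        self.estimator = estimator   # Stage I: illumination distribution + environment texture cues
        self.diffusion = diffusion   # Stage II: conditional panoramic texture diffusion model
        self.hdr_net = hdr_net       # Stage III: LDR-to-HDR panorama reconstruction

    @torch.no_grad()
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Stage I: estimate illumination conditions from the limited-FOV input image.
        illum_dist, env_texture = self.estimator(image)

        # Stage II: generate an LDR panorama guided by the Stage-I conditions.
        ldr_pano = self.diffusion(cond=(illum_dist, env_texture))

        # Stage III: reconstruct an HDR panorama from the LDR result,
        # reusing the estimated illumination distribution.
        hdr_pano = self.hdr_net(ldr_pano, illum_dist)
        return hdr_pano

The concrete network architectures, conditioning mechanism, and training losses are not specified in the abstract, so each submodule is left as an injected nn.Module rather than a defined network.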