Authors
Guo Xiang-yu, Wu Zhanqian, Xiong Kaixin, Xu Ziyang, Zhou Li-jun, XU Gangwei, Xu ShaoQing, Sun Haiyang, Wang Bing, Chen Guang, Ye Hangjun, Liu Wenyu, Wang Xing-gang
Source
Journal: Cornell University - arXiv
Date: 2025-06-09
Identifier
DOI: 10.48550/arxiv.2506.07497
Abstract
We present Genesis, a unified framework for joint generation of multi-view driving videos and LiDAR sequences with spatio-temporal and cross-modal consistency. Genesis employs a two-stage architecture that integrates a DiT-based video diffusion model with 3D-VAE encoding, and a BEV-aware LiDAR generator with NeRF-based rendering and adaptive sampling. Both modalities are directly coupled through a shared latent space, enabling coherent evolution across visual and geometric domains. To guide the generation with structured semantics, we introduce DataCrafter, a captioning module built on vision-language models that provides scene-level and instance-level supervision. Extensive experiments on the nuScenes benchmark demonstrate that Genesis achieves state-of-the-art performance across video and LiDAR metrics (FVD 16.95, FID 4.24, Chamfer 0.611), and benefits downstream tasks including segmentation and 3D detection, validating the semantic fidelity and practical utility of the generated data.
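The abstract's central design idea is that the video and LiDAR branches are coupled through a shared latent space, so both modalities evolve from the same underlying state. The sketch below is a minimal illustration of that coupling pattern only; the class name, dimensions, and layer choices are assumptions for illustration and do not reproduce the paper's DiT-based diffusion model, 3D-VAE, or BEV-aware LiDAR generator.

```python
import torch
import torch.nn as nn

class SharedLatentGenerator(nn.Module):
    """Illustrative sketch: two modality heads driven by one shared latent.

    All names and dimensions here are assumptions; in Genesis the video branch
    is a DiT-based diffusion model with 3D-VAE encoding and the LiDAR branch is
    a BEV-aware generator with NeRF-based rendering.
    """

    def __init__(self, latent_dim=256, video_dim=512, lidar_dim=128):
        super().__init__()
        # Shared latent projection that both modalities condition on.
        self.shared_proj = nn.Linear(latent_dim, latent_dim)
        # Placeholder decoder heads standing in for the real generators.
        self.video_head = nn.Sequential(
            nn.Linear(latent_dim, video_dim), nn.GELU(), nn.Linear(video_dim, video_dim)
        )
        self.lidar_head = nn.Sequential(
            nn.Linear(latent_dim, lidar_dim), nn.GELU(), nn.Linear(lidar_dim, lidar_dim)
        )

    def forward(self, z):
        # z: (batch, latent_dim) shared latent produced upstream.
        z = self.shared_proj(z)
        video_feat = self.video_head(z)  # would feed the video decoder
        lidar_feat = self.lidar_head(z)  # would feed the LiDAR generator
        return video_feat, lidar_feat

# Usage: one latent drives both branches, keeping the modalities coupled.
model = SharedLatentGenerator()
z = torch.randn(4, 256)
video_feat, lidar_feat = model(z)
```

Because both heads read the same projected latent, any change in the underlying scene state propagates to the visual and geometric outputs together, which is the consistency property the abstract claims for the shared latent space.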