Recent advances in 3D object generation with diffusion models have achieved remarkable success, but generating realistic 3D urban scenes remains challenging. Existing methods that rely solely on 3D diffusion models tend to suffer degraded appearance details, while those that use only 2D diffusion models typically compromise camera controllability. To overcome these limitations, we propose ScenDi, a method for urban scene generation that integrates both 3D and 2D diffusion models. We first train a 3D latent diffusion model to generate 3D Gaussians, enabling image rendering at a relatively low resolution. For controllable synthesis, this 3DGS generation process can optionally be conditioned on inputs such as 3D bounding boxes, road maps, or text prompts. We then train a 2D video diffusion model to enhance appearance details, conditioned on images rendered from the 3D Gaussians. By leveraging the coarse 3D scene as guidance for the 2D video diffusion, ScenDi generates the desired scenes from the input conditions while faithfully following the specified camera trajectories. Experiments on two challenging real-world datasets, Waymo and KITTI-360, demonstrate the effectiveness of our approach.
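To make the two-stage cascade concrete, the snippet below is a minimal sketch of inference under assumed settings: the denoiser classes, tensor shapes, diffusion schedule, and the stubbed renderer are all illustrative placeholders, not the actual ScenDi implementation.

```python
# Minimal sketch of a 3D-to-2D diffusion cascade at inference time (not the
# official ScenDi code). Networks, shapes, and the DDPM schedule are assumed;
# the 3DGS renderer is stubbed with a random low-resolution clip.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 50                                     # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

class Denoiser3D(nn.Module):               # stand-in for eps_theta^3D
    def __init__(self, c=8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(c, 32, 3, padding=1), nn.SiLU(),
                                 nn.Conv3d(32, c, 3, padding=1))
    def forward(self, z, t):
        return self.net(z)

class Denoiser2D(nn.Module):               # stand-in for eps_phi^2D (video)
    def __init__(self, c=3):
        super().__init__()
        # conditioning by channel-concatenating the rendered clip (assumption)
        self.net = nn.Sequential(nn.Conv2d(2 * c, 32, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(32, c, 3, padding=1))
    def forward(self, x, cond, t):
        return self.net(torch.cat([x, cond], dim=1))

@torch.no_grad()
def ddpm_sample(eps_model, shape, cond=None):
    """Plain DDPM ancestral sampling loop."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps = eps_model(x, cond, t) if cond is not None else eps_model(x, t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Stage 1: sample a coarse 3D latent z^V; rendering the decoded 3DGS would
# produce a low-resolution clip (stubbed here with random frames).
z_voxel = ddpm_sample(Denoiser3D(), shape=(1, 8, 16, 16, 16))
coarse_clip = torch.rand(4, 3, 64, 64)     # placeholder for 3DGS renderings

# Stage 2: refine the clip with the conditional 2D model at higher resolution.
cond = F.interpolate(coarse_clip, size=(128, 128), mode="bilinear",
                     align_corners=False)
fine_clip = ddpm_sample(Denoiser2D(), shape=(4, 3, 128, 128), cond=cond)
print(fine_clip.shape)                     # torch.Size([4, 3, 128, 128])
```

The key design point this sketch mirrors is that the 2D model never chooses the camera: it only refines frames already rendered from the generated 3D Gaussians, which is what preserves camera controllability.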
ScenDi leverages a cascade of 3D and 2D diffusion models to generate high-quality urban scenes. Top: We first build a Voxel-to-3DGS VQ-VAE to reconstruct scenes in a feed-forward manner. The input is a colored voxel grid $\mathcal{V}$ constructed using an off-the-shelf metric depth estimator, and the output is a set of 3D Gaussian primitives $\mathcal{G}$. We then train a 3D diffusion model ${\epsilon}_{\theta}^{\text{3D}}$ on the latent space $\mathbf{z}^{\mathcal{V}}$ to generate coarse 3D scenes, optionally conditioning on signals such as road maps or text prompts to enable explicit control over the content. Bottom: Based on the coarse 3D scene, we train a 2D video diffusion model to refine foreground appearance details and to generate distant areas. We achieve this by using video clips $\tilde{\mathcal{C}}$ rendered from the generated 3DGS as 3D conditioning signals to fine-tune a conditional 2D latent diffusion model ${\epsilon}_{\phi}^{\text{2D}}$.
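As an illustration of the first preprocessing step, the sketch below builds a colored voxel grid $\mathcal{V}$ by unprojecting an RGB frame with a metric depth map and scattering colors into voxels. It is a hedged example under assumed settings: the intrinsics, grid extent, resolution, and the function name `depth_to_colored_voxels` are hypothetical and not taken from the paper's pipeline.

```python
# Illustrative construction of a colored voxel grid from one RGB-D frame
# (assumed preprocessing, not the paper's exact recipe).
import torch

def depth_to_colored_voxels(rgb, depth, K, grid_min, grid_max, res):
    """Unproject an RGB-D frame and scatter colors into a dense voxel grid.

    rgb:   (3, H, W) colors in [0, 1]     depth: (H, W) metric depth in meters
    K:     (3, 3) camera intrinsics
    Returns a (4, res, res, res) grid: averaged RGB plus an occupancy channel.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Back-project pixels to camera-space 3D points.
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    pts = torch.stack([x, y, depth], dim=-1).reshape(-1, 3)
    cols = rgb.permute(1, 2, 0).reshape(-1, 3)

    # Keep points inside the grid and quantize them to voxel indices.
    idx = ((pts - grid_min) / (grid_max - grid_min) * res).long()
    valid = ((idx >= 0) & (idx < res)).all(dim=1) & (depth.reshape(-1) > 0)
    idx, cols = idx[valid], cols[valid]

    grid = torch.zeros(4, res, res, res)
    count = torch.zeros(res, res, res)
    flat = idx[:, 0] * res * res + idx[:, 1] * res + idx[:, 2]
    for c in range(3):  # average the colors of points landing in each voxel
        grid[c].view(-1).index_add_(0, flat, cols[:, c])
    count.view(-1).index_add_(0, flat, torch.ones(flat.shape[0]))
    occ = count > 0
    grid[:3][:, occ] /= count[occ]
    grid[3] = occ.float()
    return grid

# Toy example with random inputs and made-up intrinsics / grid bounds.
K = torch.tensor([[200.0, 0, 64], [0, 200.0, 48], [0, 0, 1]])
V = depth_to_colored_voxels(torch.rand(3, 96, 128),
                            torch.rand(96, 128) * 20 + 1.0, K,
                            grid_min=torch.tensor([-10.0, -10.0, 0.0]),
                            grid_max=torch.tensor([10.0, 10.0, 20.0]), res=32)
print(V.shape)  # torch.Size([4, 32, 32, 32])
```

In the described pipeline, such a grid would be encoded by the Voxel-to-3DGS VQ-VAE into the latent $\mathbf{z}^{\mathcal{V}}$ on which the 3D diffusion model is trained.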
ScenDi generates high-quality urban scenes using a 3D-to-2D Scene Diffusion cascade, with optional conditioning signals such as text and layout for controllable generation in 3D space.
Our method can render corresponding scene views from flexible camera trajectories, even though the training data consists primarily of forward-moving trajectories.