Vivid4D: Improving 4D Reconstruction from Monocular Video by Video Inpainting

¹Zhejiang University   ²ByteDance

TL;DR: Vivid4D tackles the challenge of reconstructing dynamic scenes from casually captured monocular videos with a video diffusion model.

Abstract

Reconstructing 4D dynamic scenes from casually captured monocular videos is valuable but highly challenging, as each timestamp is observed from a single viewpoint. We introduce Vivid4D, a novel approach that enhances 4D monocular video synthesis by augmenting observation views — synthesizing multi-view videos from a monocular input. Unlike existing methods that either solely leverage geometric priors for supervision or use generative priors while overlooking geometry, we integrate both. This reformulates view augmentation as a video inpainting task, where observed views are warped into new viewpoints based on monocular depth priors. To achieve this, we train a video inpainting model on unposed web videos with synthetically generated masks that mimic warping occlusions, ensuring spatially and temporally consistent completion of missing regions. To further mitigate inaccuracies in monocular depth priors, we introduce an iterative view augmentation strategy and a robust reconstruction loss. Experiments demonstrate that our method effectively improves monocular 4D scene reconstruction and completion.
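To make the geometric side of this idea concrete, the sketch below forward-warps a single frame into a novel viewpoint using its monocular depth and marks the uncovered pixels as the region a video inpainting model would be asked to complete. This is a minimal illustration only, assuming a shared pinhole intrinsic matrix and nearest-pixel splatting; the function and variable names are ours, not taken from the paper's implementation.

```python
import numpy as np

def warp_to_novel_view(image, depth, K, R, t):
    """Forward-warp `image` (H, W, 3) into a novel view using per-pixel
    `depth` (H, W), shared pinhole intrinsics K (3, 3), and the relative
    pose (R, t) taking source-camera points into the novel camera.
    Returns the warped image and a binary hole mask, i.e. the regions
    left for video inpainting."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)

    # Unproject source pixels to 3D, then move them into the novel camera.
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))
    pts = R @ pts + t.reshape(3, 1)

    # Project into the novel view (nearest-pixel splatting).
    proj = K @ pts
    z = proj[2]
    un = np.round(proj[0] / np.maximum(z, 1e-6)).astype(int)
    vn = np.round(proj[1] / np.maximum(z, 1e-6)).astype(int)
    ok = np.where((z > 1e-6) & (un >= 0) & (un < W) & (vn >= 0) & (vn < H))[0]

    # Approximate z-buffer: write far points first so nearer ones overwrite.
    order = ok[np.argsort(-z[ok])]
    warped = np.zeros_like(image)
    hole = np.ones((H, W), dtype=bool)        # True = missing, to be inpainted
    warped[vn[order], un[order]] = image.reshape(-1, image.shape[-1])[order]
    hole[vn[order], un[order]] = False
    return warped, hole

# Toy usage: a constant-depth frame warped under a small lateral translation.
H, W = 120, 160
K = np.array([[100.0, 0, W / 2], [0, 100.0, H / 2], [0, 0, 1.0]])
image = np.random.rand(H, W, 3)
depth = np.full((H, W), 2.0)
warped, hole = warp_to_novel_view(image, depth, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(warped.shape, hole.mean())              # fraction of pixels left to inpaint
```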

[Teaser figure]

Illustration. We improve dynamic scene reconstruction from casually captured monocular videos by synthesizing augmented views. Our approach integrates both geometric and generative priors, reformulating view augmentation as a video inpainting task. This enables our method to effectively complete invisible regions of the scene and enhance reconstruction quality.

Comparison Gallery

Pipeline

[Pipeline figure]

Pipeline. Given an input monocular video, we first perform sparse reconstruction to obtain camera poses and align monocular depth to metric scale, forming an initial data buffer \(D_0\). In each iterative view augmentation step, we select frames at each timestamp from the previous buffer \(D_{j-1}\) and warp them to novel viewpoints under pre-defined camera poses \(T\), producing novel-view videos together with masks marking the temporally continuous invisible regions. These masked videos, along with the binary masks and an anchor video, are fed into our pre-trained anchor-conditioned video inpainting diffusion model to produce completed novel-view videos. We then update the buffer to \(D_j\) with these completed videos, their metric depths, and their poses. Finally, both the original monocular video and all synthesized multi-view videos are used to supervise 4D scene reconstruction.
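As one concrete example of the first stage, the snippet below fits a per-frame scale and shift that maps relative monocular depth onto the metric depth of the sparse reconstruction at pixels with SfM observations. This is a common least-squares alignment offered as an assumption about how such a step can be done, not the paper's exact procedure; all names are illustrative.

```python
import numpy as np

def align_monocular_depth(mono_depth, sparse_depth, sparse_mask):
    """Fit metric_depth ≈ s * mono_depth + b at pixels where the sparse
    reconstruction provides depth (sparse_mask == True), then apply the
    fitted scale/shift to the full monocular depth map."""
    x = mono_depth[sparse_mask]
    y = sparse_depth[sparse_mask]
    A = np.stack([x, np.ones_like(x)], axis=1)   # columns: [d_mono, 1]
    (s, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * mono_depth + b

# Toy usage: recover a known scale/shift from noisy sparse observations.
rng = np.random.default_rng(0)
mono = rng.uniform(0.1, 1.0, size=(60, 80))        # relative depth
metric = 3.0 * mono + 0.5                          # ground-truth metric depth
mask = rng.random((60, 80)) < 0.02                 # sparse SfM points
aligned = align_monocular_depth(mono, metric + rng.normal(0, 0.01, mono.shape), mask)
print(np.abs(aligned - metric).mean())             # small alignment error
```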

Citation


      @misc{huang2025vivid4dimproving4dreconstruction,
        title={Vivid4D: Improving 4D Reconstruction from Monocular Video by Video Inpainting}, 
        author={Jiaxin Huang and Sheng Miao and BangBang Yang and Yuewen Ma and Yiyi Liao},
        year={2025},
        eprint={2504.11092},
        archivePrefix={arXiv},
        primaryClass={cs.CV},
        url={https://arxiv.org/abs/2504.11092}, 
      }