GS-DiT: Advancing Video Generation with Pseudo 4D Gaussian Fields through Efficient Dense 3D Point Tracking

4D video control is essential in video generation because it enables sophisticated lens techniques, such as multi-camera shooting and dolly zoom, which existing methods do not support. Training a video Diffusion Transformer (DiT) directly to control 4D content requires expensive multi-view videos. Inspired by Monocular Dynamic Novel View Synthesis (MDVS), which optimizes a 4D representation and renders videos according to different 4D elements such as camera pose and object motion edits, we bring pseudo 4D Gaussian fields to video generation. Specifically, we propose a novel framework that constructs a pseudo 4D Gaussian field with dense 3D point tracking and renders the Gaussian field for all video frames. We then finetune a pretrained DiT to generate videos following the guidance of the rendered video; the resulting model is dubbed GS-DiT.

To boost the training of GS-DiT, we also propose an efficient Dense 3D Point Tracking (D3D-PT) method for constructing the pseudo 4D Gaussian field. Our D3D-PT outperforms SpatialTracker, the state-of-the-art sparse 3D point tracking method, in accuracy while accelerating inference by two orders of magnitude.

At inference time, GS-DiT can generate videos that share the same dynamic content while adhering to different camera parameters, addressing a significant limitation of current video generation models. GS-DiT demonstrates strong generalization and extends the 4D controllability of Gaussian splatting to video generation beyond camera poses alone. It supports advanced cinematic effects through manipulation of the Gaussian field and camera intrinsics, making it a powerful tool for creative video production.
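The pipeline the abstract describes — lift dense 3D point tracks into a colored point set, then re-render that set per frame under an edited camera to produce the guidance video for the DiT — can be summarized in a few lines. The sketch below is a minimal illustration under assumed interfaces: the function names, array shapes, and the single-pixel z-buffered splat (standing in for true Gaussian rasterization) are our own assumptions, not the authors' code.

```python
# Minimal sketch of rendering a pseudo 4D Gaussian field, assuming dense
# 3D tracks are already available (e.g. from a tracker like the paper's
# D3D-PT, which is not reproduced here). Each tracked point carries a
# fixed color from the first frame; per frame, the moving points are
# projected into an edited camera and z-buffered into a guidance image.
# A real implementation would rasterize anisotropic Gaussians; a one-pixel
# splat is used here only to keep the example self-contained.
import numpy as np

def render_guidance_frame(points_3d, colors, K, R, t, hw=(480, 720)):
    """Splat tracked 3D points (N,3) with colors (N,3) into an image.

    K: (3,3) intrinsics; R: (3,3), t: (3,) world-to-camera pose.
    """
    h, w = hw
    cam = points_3d @ R.T + t                 # world -> camera coordinates
    z = cam[:, 2]
    valid = z > 1e-6                          # keep points in front of camera
    uvw = cam[valid] @ K.T                    # perspective projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    img = np.zeros((h, w, 3), dtype=np.float32)
    depth = np.full((h, w), np.inf, dtype=np.float32)
    cols, zs = colors[valid], z[valid]
    inb = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), c, d in zip(uv[inb], cols[inb], zs[inb]):
        if d < depth[v, u]:                   # nearest point wins (z-buffer)
            depth[v, u] = d
            img[v, u] = c
    return img

def render_guidance_video(tracks, colors, Ks, poses):
    """tracks: (T,N,3) dense 3D point tracks; Ks: per-frame intrinsics;
    poses: list of (R, t) per-frame extrinsics."""
    return [render_guidance_frame(tracks[f], colors, Ks[f], R, t)
            for f, (R, t) in enumerate(poses)]
```

The rendered frames would then serve as the conditioning signal for the finetuned DiT. Note that the intrinsics vary per frame: interpolating the focal length while dollying the camera pose back is exactly the kind of intrinsics manipulation that yields a dolly-zoom effect.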
