InstantSplat: Unbounded sparse-view pose-free Gaussian splatting in 40 seconds
While novel view synthesis (NVS) has made substantial progress in 3D computer vision, it
typically requires an initial estimation of camera intrinsics and extrinsics from dense …
ViewCrafter: Taming video diffusion models for high-fidelity novel view synthesis
Despite recent advancements in neural 3D reconstruction, the dependence on dense multi-
view captures restricts their broader applicability. In this work, we propose …
MegaScenes: Scene-level view synthesis at scale
Scene-level novel view synthesis (NVS) is fundamental to many vision and graphics
applications. Recently, pose-conditioned diffusion models have led to significant progress …
CompGS: Smaller and faster Gaussian splatting with vector quantization
KL Navaneet, K Pourahmadi Meibodi… - … on Computer Vision, 2025 - Springer
3D Gaussian Splatting (3DGS) is a new method for modeling and rendering 3D
radiance fields that achieves much faster learning and rendering time compared to SOTA …
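For context on the vector-quantization idea named in this title: below is a minimal, generic sketch of compressing per-Gaussian attributes with a shared k-means codebook, where each Gaussian stores only a small codebook index. The feature dimension, codebook size, and use of scikit-learn here are illustrative assumptions, not the actual CompGS pipeline.

```python
# Generic sketch: codebook compression of per-Gaussian parameters via k-means.
# Shapes and sizes below are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-Gaussian appearance features (e.g., SH color coefficients).
gaussians = rng.normal(size=(10_000, 48)).astype(np.float32)

# Learn a shared codebook over all Gaussians.
kmeans = KMeans(n_clusters=256, n_init=1, random_state=0).fit(gaussians)
codebook = kmeans.cluster_centers_.astype(np.float32)  # (256, 48)
indices = kmeans.labels_.astype(np.uint8)              # one 1-byte index per Gaussian

# Dequantize at render time with a simple table lookup.
reconstructed = codebook[indices]

raw_bytes = gaussians.nbytes
compressed_bytes = codebook.nbytes + indices.nbytes
print(f"compression ratio: {raw_bytes / compressed_bytes:.1f}x")
```

The storage saving comes from replacing each full parameter vector with a short index into a table shared by all Gaussians, at the cost of a quantization error controlled by the codebook size.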
No pose, no problem: Surprisingly simple 3D Gaussian splats from sparse unposed images
We introduce NoPoSplat, a feed-forward model capable of reconstructing 3D scenes
parameterized by 3D Gaussians from unposed sparse multi-view images. Our model …
MVSplat360: Feed-forward 360° scene synthesis from sparse views
We introduce MVSplat360, a feed-forward approach for 360° novel view synthesis
(NVS) of diverse real-world scenes, using only sparse observations. This setting is …
3DGS-Enhancer: Enhancing unbounded 3D Gaussian splatting with view-consistent 2D diffusion priors
Novel-view synthesis aims to generate novel views of a scene from multiple input images or
videos, and recent advancements like 3D Gaussian splatting (3DGS) have achieved notable …
DimensionX: Create any 3D and 4D scenes from a single image with controllable video diffusion
In this paper, we introduce DimensionX, a framework designed to generate
photorealistic 3D and 4D scenes from just a single image with video diffusion. Our approach …
AnimateAnything: Consistent and Controllable Animation for Video Generation
G Lei, C Wang, H Li, R Zhang, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
We present a unified controllable video generation approach AnimateAnything that
facilitates precise and consistent video manipulation across various conditions, including …
Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning
In this study, we explore the influence of different observation spaces on robot learning,
focusing on three predominant modalities: RGB, RGB-D, and point cloud. Through extensive …