Make-It-3D: High-fidelity 3D creation from a single image with diffusion prior

J Tang, T Wang, B Zhang, T Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we investigate the problem of creating high-fidelity 3D content from only a single
image. This is inherently challenging: it essentially involves estimating the underlying 3D …

DynIBaR: Neural dynamic image-based rendering

Z Li, Q Wang, F Cole, R Tucker… - Proceedings of the …, 2023 - openaccess.thecvf.com
We address the problem of synthesizing novel views from a monocular video depicting a
complex dynamic scene. State-of-the-art methods based on temporally varying Neural …

Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction

C Sun, M Sun, HT Chen - … of the IEEE/CVF conference on …, 2022 - openaccess.thecvf.com
We present a super-fast convergence approach to reconstructing the per-scene radiance
field from a set of images that capture the scene with known poses. This task, which is often …
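
A minimal sketch of the kind of representation this line of work optimizes, assuming nothing about the paper's actual training scheme: radiance-field quantities stored in an explicit voxel grid and read back with trilinear interpolation. The grid size, [0, 1]^3 coordinates, and NumPy implementation are illustrative choices, not the authors' code.

# Illustrative only: query a dense density voxel grid with trilinear
# interpolation, the explicit alternative to a large coordinate MLP.
import numpy as np

def trilinear_density(grid, pts):
    """grid: (Nx, Ny, Nz) densities; pts: (M, 3) points in [0, 1]^3."""
    res = np.array(grid.shape)
    xyz = pts * (res - 1)                  # continuous voxel coordinates
    lo = np.floor(xyz).astype(int)
    hi = np.minimum(lo + 1, res - 1)
    w = xyz - lo                           # per-axis interpolation weights
    out = np.zeros(len(pts))
    for corner in np.ndindex(2, 2, 2):     # 8 surrounding grid corners
        idx = np.where(corner, hi, lo)
        weight = np.prod(np.where(corner, w, 1 - w), axis=1)
        out += weight * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

grid = np.random.rand(64, 64, 64)          # toy 64^3 density grid
print(trilinear_density(grid, np.random.rand(5, 3)))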

Ref-NeRF: Structured view-dependent appearance for neural radiance fields

D Verbin, P Hedman, B Mildenhall… - 2022 IEEE/CVF …, 2022 - ieeexplore.ieee.org
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a
scene as a continuous volumetric function, parameterized by multilayer perceptrons that …
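
For concreteness, a minimal sketch of the NeRF-style network the snippet describes, assuming a plain PyTorch setup: an MLP that maps a 3D position and viewing direction to a density and a view-dependent color. Positional encoding and Ref-NeRF's structured appearance parameterization are omitted, and the layer sizes are arbitrary.

# Illustrative only: the basic NeRF-style MLP referred to in the abstract.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.density_head = nn.Linear(hidden, 1)        # sigma from position only
        self.color_head = nn.Sequential(                # RGB also sees view direction
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, xyz, viewdir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.density_head(h))        # non-negative density
        rgb = self.color_head(torch.cat([h, viewdir], dim=-1))
        return sigma, rgb

# Toy query: 1024 random points with unit view directions.
model = TinyNeRF()
xyz = torch.rand(1024, 3)
dirs = torch.nn.functional.normalize(torch.rand(1024, 3), dim=-1)
sigma, rgb = model(xyz, dirs)
print(sigma.shape, rgb.shape)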

F2-NeRF: Fast neural radiance field training with free camera trajectories

P Wang, Y Liu, Z Chen, L Liu, Z Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper presents a novel grid-based NeRF called F2-NeRF (Fast-Free-NeRF) for novel
view synthesis, which enables arbitrary input camera trajectories and only costs a few …

pixelSplat: 3D Gaussian splats from image pairs for scalable generalizable 3D reconstruction

D Charatan, SL Li, A Tagliasacchi… - Proceedings of the …, 2024 - openaccess.thecvf.com
We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D radiance fields
parameterized by 3D Gaussian primitives from pairs of images. Our model features real-time …
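
As a hedged illustration of the primitive the snippet mentions, not pixelSplat's feed-forward network: a 3D Gaussian splat is typically parameterized by a mean, an orientation, per-axis scales, an opacity, and a color, with covariance Sigma = R S S^T R^T. The field names and NumPy code below are illustrative assumptions.

# Illustrative only: one 3D Gaussian primitive and its covariance.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mean: np.ndarray      # (3,) center in world space
    quat: np.ndarray      # (4,) unit quaternion (w, x, y, z) for orientation
    scale: np.ndarray     # (3,) per-axis standard deviations
    opacity: float        # blending weight in [0, 1]
    rgb: np.ndarray       # (3,) color

    def covariance(self):
        w, x, y, z = self.quat / np.linalg.norm(self.quat)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T

g = Gaussian3D(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]),
               np.array([0.1, 0.2, 0.05]), 0.8, np.array([0.9, 0.4, 0.2]))
print(g.covariance())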

Dense depth priors for neural radiance fields from sparse input views

B Roessle, JT Barron, B Mildenhall… - Proceedings of the …, 2022 - openaccess.thecvf.com
Neural radiance fields (NeRF) encode a scene into a neural representation that enables
photo-realistic rendering of novel views. However, a successful reconstruction from RGB …

Robust dynamic radiance fields

YL Liu, C Gao, A Meuleman… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dynamic radiance field reconstruction methods aim to model the time-varying structure and
appearance of a dynamic scene. Existing methods, however, assume that accurate camera …

Urban radiance fields

K Rematas, A Liu, PP Srinivasan… - Proceedings of the …, 2022 - openaccess.thecvf.com
The goal of this work is to perform 3D reconstruction and novel view synthesis from data
captured by scanning platforms commonly deployed for world mapping in urban outdoor …

Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields

JT Barron, B Mildenhall, M Tancik… - Proceedings of the …, 2021 - openaccess.thecvf.com
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a
single ray per pixel and may therefore produce renderings that are excessively blurred or …
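
A small sketch of what "a single ray per pixel" means in practice, assuming a pinhole camera model; Mip-NeRF's actual remedy (tracing cones and integrating over conical frustums) is not reproduced here.

# Illustrative only: one ray through each pixel center of a pinhole camera.
import numpy as np

def pixel_rays(H, W, focal, c2w):
    """Return one (origin, direction) ray per pixel. c2w: (4, 4) camera-to-world."""
    j, i = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Directions in camera space, through each pixel center.
    dirs = np.stack([(i + 0.5 - W / 2) / focal,
                     -(j + 0.5 - H / 2) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    rays_d = dirs @ c2w[:3, :3].T                     # rotate into world space
    rays_d /= np.linalg.norm(rays_d, axis=-1, keepdims=True)
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d

o, d = pixel_rays(4, 6, focal=100.0, c2w=np.eye(4))
print(o.shape, d.shape)  # (4, 6, 3) each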