Wonder3D: Single image to 3D using cross-domain diffusion
In this work we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …
Generative novel view synthesis with 3D-aware diffusion models
We present a diffusion-based model for 3D-aware generative novel view synthesis from as
few as a single input image. Our model samples from the distribution of possible renderings …
SyncDreamer: Generating multiview-consistent images from a single-view image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …
Diffusion with forward models: Solving stochastic inverse problems without direct supervision
Denoising diffusion models are a powerful class of generative models used to capture
complex distributions of real-world signals. However, their applicability is limited to …
Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives
We present Ego-Exo4D, a diverse, large-scale multimodal, multiview video dataset
and benchmark challenge. Ego-Exo4D centers around simultaneously-captured egocentric …
SceneScape: Text-driven consistent scene generation
We present a method for text-driven perpetual view generation: synthesizing long-term
videos of various scenes given only an input text prompt describing the scene and camera …
ViewDiff: 3D-consistent image generation with text-to-image models
3D asset generation is attracting massive attention, inspired by the recent
success of text-guided 2D content creation. Existing text-to-3D methods use pretrained text …
ReconFusion: 3D reconstruction with diffusion priors
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at
rendering photorealistic novel views of complex scenes. However, recovering a high-quality …
Expressive text-to-image generation with rich text
Plain text has become a prevalent interface for text-to-image synthesis. However, its limited
customization options hinder users from accurately describing desired outputs. For example …
Direct2.5: Diverse text-to-3D generation via multi-view 2.5D diffusion
Recent advances in generative AI have unveiled significant potential for the creation of 3D
content. However, current methods either apply a pre-trained 2D diffusion model with the …