State of the art on diffusion models for visual computing
The field of visual computing is rapidly advancing due to the emergence of generative
artificial intelligence (AI), which unlocks unprecedented capabilities for the generation …
One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization
Single image 3D reconstruction is an important but challenging task that requires extensive
knowledge of our natural world. Many existing methods solve this problem by optimizing a …
Wonder3d: Single image to 3d using cross-domain diffusion
In this work we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …
Mvdream: Multi-view diffusion for 3d generation
We propose MVDream, a multi-view diffusion model that is able to generate geometrically
consistent multi-view images from a given text prompt. By leveraging image diffusion models …
Text-to-3d using gaussian splatting
Automatic text-to-3D generation that combines Score Distillation Sampling (SDS) with the
optimization of volume rendering has achieved remarkable progress in synthesizing realistic …
One-2-3-45++: Fast single image to 3d objects with consistent multi-view generation and 3d diffusion
Recent advancements in open-world 3D object generation have been remarkable, with
image-to-3D methods offering superior fine-grained control over their text-to-3D …
Syncdreamer: Generating multiview-consistent images from a single-view image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …
Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D
mesh generation from a single unposed image in the wild using both 2D and 3D priors. In …
Align your gaussians: Text-to-4d with dynamic 3d gaussians and composed diffusion models
Text-guided diffusion models have revolutionized image and video generation and have
also been successfully used for optimization-based 3D object synthesis. Here we instead …
Gaussiandreamer: Fast generation from text to 3d gaussians by bridging 2d and 3d diffusion models
In recent times, the generation of 3D assets from text prompts has shown impressive results.
Both 2D and 3D diffusion models can help generate decent 3D objects based on prompts …