State of the art on diffusion models for visual computing
The field of visual computing is rapidly advancing due to the emergence of generative
artificial intelligence (AI), which unlocks unprecedented capabilities for the generation …
Wonder3D: Single image to 3D using cross-domain diffusion
In this work, we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …
MVDream: Multi-view diffusion for 3D generation
We propose MVDream, a multi-view diffusion model that is able to generate geometrically
consistent multi-view images from a given text prompt. By leveraging image diffusion models …
SyncDreamer: Generating multiview-consistent images from a single-view image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …
DragDiffusion: Harnessing diffusion models for interactive point-based image editing
Accurate and controllable image editing is a challenging task that has attracted significant
attention recently. Notably, DragGAN, developed by Pan et al. (2023), is an interactive point …
Diffusion model as representation learner
Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive
results on various generative tasks. Despite their promise, the learned representations of pre …
DragonDiffusion: Enabling drag-style manipulation on diffusion models
Despite the ability of existing large-scale text-to-image (T2I) models to generate high-quality
images from detailed textual descriptions, they often lack the ability to precisely edit the …
MVDiffusion++: A dense high-resolution multi-view diffusion model for single or sparse-view 3D object reconstruction
This paper presents a neural architecture, MVDiffusion++, for 3D object reconstruction that
synthesizes dense and high-resolution views of an object given one or a few images without …
Probing the 3D awareness of visual foundation models
Recent advances in large-scale pretraining have yielded visual foundation models with
strong capabilities. Not only can recent models generalize to arbitrary images for their …
SceneTex: High-quality texture synthesis for indoor scenes via diffusion priors
We propose SceneTex, a novel method for effectively generating high-quality and style-
consistent textures for indoor scenes using depth-to-image diffusion priors. Unlike previous …