Multimodal image synthesis and editing: A survey and taxonomy
As information exists in various modalities in the real world, effective interaction and fusion
among multimodal information play a key role in the creation and perception of multimodal …
A state-of-the-art review on image synthesis with generative adversarial networks
L Wang, W Chen, W Yang, F Bi, FR Yu - IEEE Access, 2020
Generative Adversarial Networks (GANs) have achieved impressive results in various image
synthesis tasks, and are becoming a hot topic in computer vision research because of the …
HeadNeRF: A real-time NeRF-based parametric head model
In this paper, we propose HeadNeRF, a novel NeRF-based parametric head model that
integrates the neural radiance field into the parametric representation of the human head. It …
Generative diffusion prior for unified image restoration and enhancement
Existing image restoration methods mostly leverage the posterior distribution of natural
images. However, they often assume known degradation and also require supervised …
Collaborative diffusion for multi-modal face generation and editing
Diffusion models have recently emerged as a powerful generative tool. Despite the great progress,
existing diffusion models mainly focus on uni-modal control, i.e., the diffusion process is …
GAN inversion: A survey
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN
model so that the image can be faithfully reconstructed from the inverted code by the …
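The GAN inversion entry above describes recovering a latent code whose reconstruction matches a given image. A minimal optimization-based sketch of that idea follows; the tiny stand-in generator, latent dimension, learning rate, and plain pixel loss are illustrative assumptions rather than the setup of any surveyed method, which would typically use a pretrained StyleGAN and perceptual losses.

```python
# Minimal optimization-based GAN inversion sketch (illustrative only).
# Assumes a differentiable generator mapping a latent code z to an image;
# the dummy generator below stands in for a pretrained model such as StyleGAN.
import torch
import torch.nn as nn

latent_dim, image_size = 128, 64

# Hypothetical stand-in for a pretrained, frozen generator.
generator = nn.Sequential(
    nn.Linear(latent_dim, 3 * image_size * image_size),
    nn.Tanh(),
    nn.Unflatten(1, (3, image_size, image_size)),
)
generator.requires_grad_(False)

target = torch.rand(1, 3, image_size, image_size) * 2 - 1  # image to invert
z = torch.zeros(1, latent_dim, requires_grad=True)         # latent code to optimize
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    recon = generator(z)
    loss = torch.mean((recon - target) ** 2)  # pixel reconstruction loss
    loss.backward()
    optimizer.step()

# z now approximates a latent code whose reconstruction matches the target,
# and can be edited in latent space before re-synthesis.
```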
StyleSpace analysis: Disentangled controls for StyleGAN image generation
Z Wu, D Lischinski… - Proceedings of the IEEE …, 2021
We explore and analyze the latent style space of StyleGAN2, a state-of-the-art architecture
for image generation, using models pretrained on several different datasets. We first show …
AD-NeRF: Audio-driven neural radiance fields for talking head synthesis
Generating high-fidelity talking head video that fits the input audio sequence is a
challenging problem that has received considerable attention recently. In this paper, we …
StyleFlow: Attribute-conditioned exploration of StyleGAN-generated images using conditional continuous normalizing flows
High-quality, diverse, and photorealistic images can now be generated by unconditional
GANs (e.g., StyleGAN). However, limited options exist to control the generation process using …
IDE-3D: Interactive disentangled editing for high-resolution 3D-aware portrait synthesis
Existing 3D-aware facial generation methods face a dilemma in quality versus editability:
they either generate editable results at low resolution or high-quality ones with no editing …