Listen, denoise, action! Audio-driven motion synthesis with diffusion models
Diffusion models have experienced a surge of interest as highly expressive yet efficiently
trainable probabilistic models. We show that these models are an excellent fit for …
Human motion diffusion as a generative prior
Recent work has demonstrated the significant potential of denoising diffusion models for
generating human motion, including text-to-motion capabilities. However, these methods are …
ClipFace: Text-guided editing of textured 3D morphable models
We propose ClipFace, a novel self-supervised approach for text-guided editing of textured
3D morphable models of faces. Specifically, we employ user-friendly language prompts to …
Single motion diffusion
Synthesizing realistic animations of humans, animals, and even imaginary creatures has
long been a goal for artists and computer graphics professionals. Compared to the imaging …
MoConVQ: Unified physics-based motion control via scalable discrete representations
In this work, we present MoConVQ, a novel unified framework for physics-based motion
control leveraging scalable discrete representations. Building upon vector quantized …
MotionFix: Text-driven 3D human motion editing
N Athanasiou, A Cseke, M Diomataris… - SIGGRAPH Asia 2024 …, 2024 - dl.acm.org
The focus of this paper is 3D motion editing. Given a 3D human motion and a textual
description of the desired modification, our goal is to generate an edited motion as …
Synthesizing long-term human motions with diffusion models via coherent sampling
Text-to-motion generation has gained increasing attention, but most existing methods are
limited to generating short-term motions that correspond to a single sentence describing a …
Monkey see, monkey do: Harnessing self-attention in motion diffusion for zero-shot motion transfer
Given the remarkable results of motion synthesis with diffusion models, a natural question
arises: how can we effectively leverage these models for motion editing? Existing diffusion …
AI-generated content (AIGC) for various data modalities: A survey
Amidst the rapid advancement of artificial intelligence (AI), the development of content
generation techniques stands out as one of the most captivating and widely discussed topics …
SATO: Stable Text-to-Motion Framework
Is the text-to-motion model robust? Recent advancements in text-to-motion models primarily
stem from more accurate predictions of specific actions. However, the text modality typically …