An image is worth one word: Personalizing text-to-image generation using textual inversion
Text-to-image models offer unprecedented freedom to guide creation through natural
language. Yet, it is unclear how such freedom can be exercised to generate images of …
Inversion-based style transfer with diffusion models
The artistic style within a painting is the means of expression, which includes not only the
painting material, colors, and brushstrokes, but also the high-level attributes, including …
SINE: Single image editing with text-to-image diffusion models
Recent works on diffusion models have demonstrated a strong capability for conditioning
image generation, e.g., text-guided image synthesis. Such success inspires many efforts …
High-fidelity generalized emotional talking face generation with multi-modal emotion space learning
Recently, emotional talking face generation has received considerable attention. However,
existing methods only adopt one-hot coding, image, or audio as emotion conditions, thus …
TeSTNeRF: Text-Driven 3D Style Transfer via Cross-Modal Learning
Text-driven 3D style transfer aims at stylizing a scene according to the text and generating
arbitrary novel views with consistency. Simply combining image/video style transfer methods …
RAST: Restorable arbitrary style transfer via multi-restoration
Arbitrary style transfer aims at reproducing the target image with provided artistic or photo-
realistic styles. Even though existing approaches can successfully transfer style information …
AVID: Any-Length Video Inpainting with Diffusion Model
Recent advances in diffusion models have successfully enabled text-guided image
inpainting. While it seems straightforward to extend such editing capability into the video …
FineStyle: Semantic-Aware Fine-Grained Motion Style Transfer with Dual Interactive-Flow Fusion
We present FineStyle, a novel framework for motion style transfer that generates expressive
human animations with specific styles for virtual reality and vision fields. It incorporates …
Mimetic models: Ethical implications of AI that acts like you
An emerging theme in artificial intelligence research is the creation of models to simulate the
decisions and behavior of specific people, in domains including game-playing, text …
Preserving structural consistency in arbitrary artist and artwork style transfer
Deep generative models are effective in style transfer. Previous methods learn one or
several specific artist styles from a collection of artworks. These methods not only …