Cartoon image processing: a survey

Y Zhao, D Ren, Y Chen, W Jia, R Wang… - International Journal of …, 2022 - Springer
With the rapid development of the cartoon industry, various studies on two-dimensional (2D)
cartoons have been proposed for different application scenarios, such as quality assessment …

Pose2Pose: Pose selection and transfer for 2D character animation

NS Willett, HV Shin, Z Jin, W Li… - Proceedings of the 25th …, 2020 - dl.acm.org
An artist faces two challenges when creating a 2D animated character to mimic a specific
human performance. First, the artist must design and draw a collection of artwork depicting …

Exploring inbetween charts with trajectory-guided sliders for cutout animation

T Fukusato, A Maejima, T Igarashi… - Multimedia Tools and …, 2024 - Springer
We introduce an interactive tool to intuitively make inbetween charts for cutout character
movements (i.e., transitioning from one image to another), inspired by cartoon animators' …

Real-time lip sync for live 2D animation

D Aneja, W Li - arXiv preprint arXiv:1910.08685, 2019 - arxiv.org
The emergence of commercial tools for real-time performance-based 2D animation has
enabled 2D characters to appear on live broadcasts and streaming platforms. A key …

Audio-oriented video interpolation using key pose

T Nakatsuka, Y Tsuchiya, M Hamanaka… - International Journal of …, 2021 - World Scientific
This paper describes a deep learning-based method for long-term video interpolation that
generates intermediate frames between two music performance videos of a person playing …

View-Dependent Deformation for 2.5-D Cartoon Models

T Fukusato, A Maejima - IEEE Computer Graphics and …, 2022 - ieeexplore.ieee.org
Two-and-a-half-dimensional (2.5-D) cartoon models are popular methods used for
simulating three-dimensional (3-D) movements, such as out-of-plane rotation, from two …

SoundToons: Exemplar-Based Authoring of Interactive Audio-Driven Animation Sprites

T Chong, HV Shin, D Aneja, T Igarashi - Proceedings of the 28th …, 2023 - dl.acm.org
Animations can come to life when they are synchronized with relevant sounds. Yet,
synchronizing animations to audio requires tedious key-framing or programming, which is …

Using machine-learning models to determine movements of a mouth corresponding to live speech

W Li, J Popovic, D Aneja, D Simons - US Patent 10,699,705, 2020 - Google Patents
Disclosed systems and methods predict visemes from an audio sequence. A viseme-
generation application accesses a first set of training data that includes a first audio …

View-dependent formulation of 2.5D cartoon models

T Fukusato, A Maejima - arXiv preprint arXiv:2103.15472, 2021 - arxiv.org
2.5D cartoon models are methods to simulate three-dimensional (3D)-like movements, such
as out-of-plane rotation, from two-dimensional (2D) shapes in different views. However …

Gala, a study of accessible workflow in producing embodied virtual reality films

R Carpio-Alfsen - 2023 - research.bond.edu.au
This PhD research is an exegesis accompanying the virtual reality (VR) short film Gala. Gala
is an embodied virtual reality (EVR) film constructed using a hybrid method of film, games …