MESH2IR: Neural acoustic impulse response generator for complex 3D scenes

A Ratnarajah, Z Tang, R Aralikatti… - Proceedings of the 30th …, 2022 - dl.acm.org
We propose a mesh-based neural network (MESH2IR) to generate acoustic impulse
responses (IRs) for indoor 3D scenes represented using a mesh. The IRs are used to create …
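A generated IR is typically applied to dry audio by convolution to auralize the scene. The sketch below is a generic illustration of that step with NumPy/SciPy, not MESH2IR's pipeline; the signals are synthetic placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve an anechoic (dry) signal with an impulse response.

    This is the standard use of an IR once generated or measured: the
    output sounds as if the dry source were played in the scene the IR
    describes.
    """
    wet = fftconvolve(dry, ir, mode="full")
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping

# Synthetic stand-ins for real audio and a network-predicted IR.
sr = 16000
dry = np.random.randn(sr).astype(np.float32)           # 1 s of "dry" audio
t = np.arange(sr // 2) / sr
ir = (np.random.randn(sr // 2) * np.exp(-6.0 * t)).astype(np.float32)
wet = auralize(dry, ir)
```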

RealImpact: A dataset of impact sound fields for real objects

S Clarke, R Gao, M Wang, M Rau… - Proceedings of the …, 2023 - openaccess.thecvf.com
Objects make unique sounds under different perturbations, environment conditions,
and poses relative to the listener. While prior works have modeled impact sounds and sound …

Dance-to-music generation with encoder-based textual inversion of diffusion models

S Li, W Dong, Y Zhang, F Tang, C Ma… - arXiv preprint arXiv …, 2024 - arxiv.org
The harmonious integration of music with dance movements is pivotal in vividly conveying
the artistic essence of dance. This alignment also significantly elevates the immersive quality …

Dance-to-Music Generation with Encoder-based Textual Inversion

S Li, W Dong, Y Zhang, F Tang, C Ma… - SIGGRAPH Asia 2024 …, 2024 - dl.acm.org
The seamless integration of music with dance movements is essential for communicating the
artistic intent of a dance piece. This alignment also significantly improves the immersive …

Rigid-body sound synthesis with differentiable modal resonators

R Diaz, B Hayes, C Saitis, G Fazekas… - ICASSP 2023-2023 …, 2023 - ieeexplore.ieee.org
Physical models of rigid bodies are used for sound synthesis in applications from virtual
environments to music production. Traditional methods, such as modal synthesis, often rely …
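The modal synthesis baseline mentioned here renders a struck object as a bank of exponentially decaying sinusoids, y(t) = Σ_i a_i e^(-d_i t) sin(2π f_i t). A minimal sketch of that classical formulation follows; the frequencies, decay rates, and amplitudes are illustrative placeholders, not values from the paper.

```python
import numpy as np

def modal_synthesis(freqs, decays, amps, duration=1.0, sr=44100):
    """Classical modal synthesis: y(t) = sum_i a_i * exp(-d_i*t) * sin(2*pi*f_i*t).

    freqs  : modal frequencies in Hz
    decays : exponential decay rates in 1/s
    amps   : modal amplitudes set by the excitation (strike position/force)
    """
    t = np.arange(int(duration * sr)) / sr
    y = np.zeros_like(t)
    for f, d, a in zip(freqs, decays, amps):
        y += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return y

# Illustrative modes for a small resonant object (placeholder values).
y = modal_synthesis(freqs=[440.0, 1370.0, 2940.0],
                    decays=[3.0, 6.0, 12.0],
                    amps=[1.0, 0.6, 0.3])
```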

Differentiable Modal Synthesis for Physical Modeling of Planar String Sound and Motion Simulation

JW Lee, J Park, MJ Choi, K Lee - arXiv preprint arXiv:2407.05516, 2024 - arxiv.org
While significant advancements have been made in music generation and differentiable
sound synthesis within machine learning and computer audition, the simulation of …

DiffSound: Differentiable Modal Sound Rendering and Inverse Rendering for Diverse Inference Tasks

X Jin, C Xu, R Gao, J Wu, G Wang, S Li - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
Accurately estimating and simulating the physical properties of objects from real-world
sound recordings is of great practical importance in the fields of vision, graphics, and …
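The abstract points to estimating physical parameters from recordings via a differentiable sound model. The sketch below is a generic illustration of that idea, not DiffSound's renderer: damped-sinusoid parameters are fit to a target waveform by gradient descent in PyTorch, using a plain waveform MSE loss for brevity (practical systems usually match spectral features instead). All names and values are illustrative.

```python
import math
import torch

sr, dur = 16000, 0.5
t = torch.arange(int(sr * dur)) / sr

def render(freqs, decays, amps):
    """Differentiable modal renderer: a sum of damped sinusoids."""
    return (amps[:, None]
            * torch.exp(-decays[:, None] * t)
            * torch.sin(2 * math.pi * freqs[:, None] * t)).sum(dim=0)

# Synthetic "recording" with known modes, standing in for a real clip.
target = render(torch.tensor([500.0, 1200.0]),
                torch.tensor([4.0, 9.0]),
                torch.tensor([1.0, 0.5]))

# Learnable modal parameters, initialized away from the target values.
freqs = torch.nn.Parameter(torch.tensor([480.0, 1250.0]))
decays = torch.nn.Parameter(torch.tensor([2.0, 5.0]))
amps = torch.nn.Parameter(torch.tensor([0.8, 0.8]))
opt = torch.optim.Adam([freqs, decays, amps], lr=1e-2)

for step in range(2000):  # toy fit; may not fully converge with an MSE loss
    opt.zero_grad()
    loss = torch.mean((render(freqs, decays, amps) - target) ** 2)
    loss.backward()
    opt.step()
```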

Interactive Neural Resonators

R Diaz, C Saitis, M Sandler - arXiv preprint arXiv:2305.14867, 2023 - arxiv.org
In this work, we propose a method for the controllable synthesis of real-time contact sounds
using neural resonators. Previous works have used physically inspired statistical methods …

AutoSFX: Automatic Sound Effect Generation for Videos

Y Wang, Z Wang, H Huang - Proceedings of the 32nd ACM International …, 2024 - dl.acm.org
Sound Effect (SFX) generation primarily aims to automatically produce sound waves for
sounding visual objects in images or videos. Rather than learning an automatic solution to …

SonifyAR: Context-Aware Sound Generation in Augmented Reality

X Su, JE Froehlich, E Koh, C Xiao - arXiv preprint arXiv:2405.07089, 2024 - arxiv.org
Sound plays a crucial role in enhancing user experience and immersiveness in Augmented
Reality (AR). However, current platforms lack support for AR sound authoring due to limited …