A review of the state of the art and future challenges of deep learning-based beamforming

H Al Kassir, ZD Zaharis, PI Lazaridis… - IEEE …, 2022 - ieeexplore.ieee.org
The key objective of this paper is to explore the recent state-of-the-art artificial intelligence
(AI) applications in the broad field of beamforming. Hence, a multitude of AI-oriented …
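As an illustration of the kind of deep-learning-aided beamforming such surveys cover, the following minimal sketch (not taken from the paper; names and array shapes are my own) shows the widely used mask-based MVDR pattern, in which a DNN supplies time-frequency masks that weight the spatial covariance estimates from which the beamforming weights are derived.

    import numpy as np

    def mask_based_mvdr(stft_mix, speech_mask, noise_mask, ref_mic=0):
        """Illustrative mask-based MVDR beamformer.
        stft_mix:    (F, T, M) complex multichannel mixture STFT
        speech_mask: (F, T) DNN-estimated mask for the target speech
        noise_mask:  (F, T) DNN-estimated mask for the interference/noise
        Returns the beamformed (F, T) STFT at the reference microphone."""
        F, T, M = stft_mix.shape
        out = np.zeros((F, T), dtype=complex)
        u = np.zeros(M)
        u[ref_mic] = 1.0                                   # reference-channel selector
        for f in range(F):
            X = stft_mix[f]                                # (T, M)
            outer = X[:, :, None] * X[:, None, :].conj()   # per-frame outer products
            phi_s = (speech_mask[f, :, None, None] * outer).mean(axis=0)
            phi_n = (noise_mask[f, :, None, None] * outer).mean(axis=0)
            phi_n += 1e-6 * np.eye(M)                      # diagonal loading
            num = np.linalg.solve(phi_n, phi_s)            # Phi_n^{-1} Phi_s
            w = (num @ u) / (np.trace(num) + 1e-10)        # MVDR weights, shape (M,)
            out[f] = X @ w.conj()                          # y_t = w^H x_t
        return out

Fully end-to-end variants instead let a network predict the beamforming weights (or the enhanced signal) directly.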

UNSSOR: unsupervised neural speech separation by leveraging over-determined training mixtures

ZQ Wang, S Watanabe - Advances in Neural Information …, 2024 - proceedings.neurips.cc
In reverberant conditions with multiple concurrent speakers, each microphone acquires a
mixture signal of multiple speakers at a different location. In over-determined conditions …
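As a rough sketch of the over-determined setting in my own notation (not copied from the paper): with C concurrent speakers captured by P > C microphones, every microphone observes

    Y_p(t,f) = \sum_{c=1}^{C} X_p^{(c)}(t,f), \qquad p = 1, \dots, P,

and an unsupervised loss in this spirit asks the network's per-speaker estimates \hat{S}^{(c)}, after passing through estimated linear filters \hat{g}_p^{(c)} that relate the reference microphone to microphone p, to reconstruct every observed mixture:

    \mathcal{L} = \sum_{p,f,t} \Big| Y_p(t,f) - \sum_{c} \big(\hat{g}_p^{(c)} * \hat{S}^{(c)}\big)(t,f) \Big|^2 .

The extra microphones are what make this constraint informative; the exact filter estimation and loss weighting in UNSSOR differ from this simplification.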

Neural full-rank spatial covariance analysis for blind source separation

Y Bando, K Sekiguchi, Y Masuyama… - IEEE Signal …, 2021 - ieeexplore.ieee.org
This paper describes a neural blind source separation (BSS) method based on amortized
variational inference (AVI) of a non-linear generative model of mixture signals. A classical …
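For context, the full-rank spatial covariance model underlying such methods (written here in the classical notation rather than the paper's exact parameterization) treats each multichannel STFT coefficient as a zero-mean complex Gaussian whose covariance sums over the sources:

    \mathbf{x}_{ft} \sim \mathcal{N}_{\mathbb{C}}\Big( \mathbf{0},\ \sum_{n=1}^{N} \lambda_{n,ft}\, \mathbf{H}_{n,f} \Big),

where \lambda_{n,ft} \ge 0 is the power spectrum of source n and \mathbf{H}_{n,f} its full-rank spatial covariance matrix. In the neural variant, the power spectra come from a deep generative model whose latent variables are inferred by an encoder network; that encoder is what the amortized variational inference refers to.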

End-to-end multi-channel transformer for speech recognition

FJ Chang, M Radfar, A Mouchtaris… - ICASSP 2021-2021 …, 2021 - ieeexplore.ieee.org
Transformers are powerful neural architectures that allow integrating different modalities
using attention mechanisms. In this paper, we leverage the neural transformer architectures …
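A minimal sketch of the underlying idea of fusing microphone channels with attention (an illustration only, not the paper's actual multi-channel transformer layers; the projection matrices are placeholders):

    import numpy as np

    def softmax(a, axis=-1):
        a = a - a.max(axis=axis, keepdims=True)
        e = np.exp(a)
        return e / e.sum(axis=axis, keepdims=True)

    def cross_channel_attention(feats, Wq, Wk, Wv):
        """feats: (T, M, D) per-frame features from M microphones.
        Each frame attends over its M channel embeddings, and the attended
        values are averaged into a single fused (T, D) sequence that a
        standard single-channel ASR encoder can consume."""
        Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv               # (T, M, D) each
        scores = Q @ K.transpose(0, 2, 1) / np.sqrt(Q.shape[-1])   # (T, M, M)
        A = softmax(scores, axis=-1)                               # channel attention
        return (A @ V).mean(axis=1)                                # (T, D)

Roughly, in the paper itself, attention over time within each channel is combined with attention across channels inside the encoder, and everything is trained jointly with the recognition objective.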

Integration of neural networks and probabilistic spatial models for acoustic blind source separation

L Drude, R Haeb-Umbach - IEEE Journal of Selected Topics in …, 2019 - ieeexplore.ieee.org
We formulate a generic framework for blind source separation (BSS), which allows
integrating data-driven spectro-temporal methods, such as deep clustering and deep …
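One common instantiation of such a hybrid (sketched in my own notation; it need not match the paper's exact model) uses a spatial mixture model, e.g. a complex angular central Gaussian mixture, whose class posteriors

    \gamma_{k,tf} \propto \pi_{k,f}\, \frac{1}{\det \mathbf{B}_{k,f}}\, \big( \tilde{\mathbf{y}}_{tf}^{\mathsf{H}} \mathbf{B}_{k,f}^{-1} \tilde{\mathbf{y}}_{tf} \big)^{-M}

(with \tilde{\mathbf{y}}_{tf} the length-normalized M-channel observation, \pi_{k,f} the mixture weights, and \mathbf{B}_{k,f} the class shape matrices) can either be initialized from the network's time-frequency masks or serve as soft targets, so that the network can be trained without clean reference signals.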

Multi-channel transformer transducer for speech recognition

FJ Chang, M Radfar, A Mouchtaris… - arXiv preprint arXiv …, 2021 - arxiv.org
Multi-channel inputs offer several advantages over single-channel inputs for improving the
robustness of on-device speech recognition systems. Recent work on multi-channel …

Weakly-Supervised Neural Full-Rank Spatial Covariance Analysis for a Front-End System of Distant Speech Recognition.

Y Bando, T Aizawa, K Itoyama, K Nakadai - Interspeech, 2022 - isca-archive.org
This paper presents a weakly-supervised multichannel neural speech separation method for
distant speech recognition (DSR) of real conversational speech mixtures. A blind source …

Multiple sound source localization, separation, and reconstruction by microphone array: A DNN-based approach

L Chen, G Chen, L Huang, YS Choy, W Sun - Applied Sciences, 2022 - mdpi.com
Simultaneous localization, separation, and reconstruction of multiple sound sources are
usually necessary in various situations, such as conference rooms, living rooms, and …

USDnet: Unsupervised Speech Dereverberation via Neural Forward Filtering

ZQ Wang - arXiv preprint arXiv:2402.00820, 2024 - arxiv.org
In reverberant conditions with a single speaker, each far-field microphone records a
reverberant version of the same speaker signal at a different location. In over-determined …
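The sketch below illustrates the kind of forward-filtering constraint the abstract alludes to (my own simplified code, not the paper's implementation; in the actual method the filtering must be carried out in a way that lets gradients reach the network):

    import numpy as np

    def forward_filter_loss(Y, S_hat, taps=20):
        """Unsupervised consistency loss between a dereverberation estimate
        and the observed microphone signals.
        Y:     (P, F, T) complex STFTs of the P microphone signals
        S_hat: (F, T)    complex STFT of the network's estimate
        A short per-frequency FIR filter over past frames of S_hat is fitted
        to each microphone by least squares; the residual is the loss."""
        P, F, T = Y.shape
        loss = 0.0
        for f in range(F):
            # Convolution matrix of delayed copies of S_hat: shape (T, taps)
            A = np.stack([np.concatenate([np.zeros(k, complex), S_hat[f, :T - k]])
                          for k in range(taps)], axis=1)
            for p in range(P):
                g, *_ = np.linalg.lstsq(A, Y[p, f], rcond=None)  # per-mic filter
                loss += np.mean(np.abs(Y[p, f] - A @ g) ** 2)
        return loss / (P * F)

Because every microphone observes a differently filtered version of the same source, the per-microphone reconstruction residual provides a training signal even though no clean target is available.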

Location as supervision for weakly supervised multi-channel source separation of machine sounds

R Falcon-Perez, G Wichern… - 2023 IEEE Workshop …, 2023 - ieeexplore.ieee.org
In this work, we are interested in learning a model to separate sources that cannot be
recorded in isolation, such as parts of a machine that must run simultaneously in order for …
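As one simple illustration of how a known location can stand in for a clean reference (a sketch under the assumption that the microphone and source geometry are known; the paper's actual supervision scheme may differ), a beamformer steered at the source yields a noisy but source-specific signal that can act as a weak training target:

    import numpy as np

    def delay_and_sum_reference(stft_mix, mic_pos, src_pos, freqs, c=343.0):
        """Steer a delay-and-sum beamformer at a known source position.
        stft_mix: (F, T, M) multichannel mixture STFT
        mic_pos:  (M, 3) microphone coordinates in meters
        src_pos:  (3,)   known position of the source of interest
        freqs:    (F,)   center frequency of each STFT bin in Hz
        Returns an (F, T) reference usable as a weak training target."""
        delays = np.linalg.norm(mic_pos - src_pos, axis=1) / c          # (M,) seconds
        delays -= delays.min()
        steer = np.exp(-2j * np.pi * freqs[:, None] * delays[None, :])  # (F, M)
        # Undo each channel's propagation delay and average the aligned channels
        return (stft_mix * steer.conj()[:, None, :]).mean(axis=-1)      # (F, T)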