Deep unsupervised domain adaptation: A review of recent advances and perspectives

X Liu, C Yoo, F Xing, H Oh, G El Fakhri… - … on Signal and …, 2022 - nowpublishers.com
Deep learning has become the method of choice to tackle real-world problems in different
domains, partly because of its ability to learn from data and achieve impressive performance …

Domain adaptation: challenges, methods, datasets, and applications

P Singhal, R Walambe, S Ramanna, K Kotecha - IEEE access, 2023 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) trained on one dataset (source domain) do not perform well
on another set of data (target domain), which is different from but has similar properties to the …

Robust wav2vec 2.0: Analyzing domain shift in self-supervised pre-training

WN Hsu, A Sriram, A Baevski, T Likhomanenko… - arXiv preprint arXiv …, 2021 - arxiv.org
Self-supervised learning of speech representations has been a very active research area
but most work is focused on a single domain such as read audio books for which there exist …

In search for a generalizable method for source free domain adaptation

M Boudiaf, T Denton… - International …, 2023 - proceedings.mlr.press
Source-free domain adaptation (SFDA) is compelling because it allows adapting an off-the-
shelf model to a new domain using only unlabelled data. In this work, we apply existing …

Momentum pseudo-labeling: Semi-supervised ASR with continuously improving pseudo-labels

Y Higuchi, N Moritz, J Le Roux… - IEEE Journal of Selected …, 2022 - ieeexplore.ieee.org
End-to-end automatic speech recognition (ASR) has become a popular alternative to
traditional module-based systems, simplifying the model-building process with a single deep …

Magic dust for cross-lingual adaptation of monolingual wav2vec-2.0

S Khurana, A Laurent, J Glass - ICASSP 2022-2022 IEEE …, 2022 - ieeexplore.ieee.org
We propose a simple and effective cross-lingual transfer learning method to adapt
monolingual wav2vec-2.0 models for Automatic Speech Recognition (ASR) in resource …

Momentum pseudo-labeling for semi-supervised speech recognition

Y Higuchi, N Moritz, J Le Roux, T Hori - arXiv preprint arXiv:2106.08922, 2021 - arxiv.org
Pseudo-labeling (PL) has been shown to be effective in semi-supervised automatic speech
recognition (ASR), where a base model is self-trained with pseudo-labels generated from …
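The pseudo-labeling idea these two entries study can be illustrated with a deliberately tiny, hypothetical sketch: a nearest-centroid "model" is fit on a few labeled points, then repeatedly labels the unlabeled pool, keeping only confident predictions as new training data. The classifier, data, and confidence threshold below are toy stand-ins, not the papers' ASR setup (which self-trains a seed speech model on pseudo-transcripts).

```python
# Toy sketch of pseudo-labeling (self-training). All names, data, and
# the nearest-centroid classifier are hypothetical illustrations; real
# semi-supervised ASR self-trains a seed speech model instead.

def fit_centroids(points, labels):
    """Nearest-centroid 'model': the mean of each class."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence); confidence is the distance margin
    between the two nearest centroids."""
    dists = sorted((abs(x - c), y) for y, c in centroids.items())
    (d0, y0), (d1, _) = dists[0], dists[1]
    return y0, d1 - d0

def self_train(labeled, unlabeled, rounds=3, threshold=1.0):
    """Iteratively absorb confident pseudo-labels into the training set."""
    points = [x for x, _ in labeled]
    labels = [y for _, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        model = fit_centroids(points, labels)
        still_unlabeled = []
        for x in pool:
            y, conf = predict(model, x)
            if conf >= threshold:   # accept only confident pseudo-labels
                points.append(x)
                labels.append(y)
            else:                   # low-confidence points wait for a
                still_unlabeled.append(x)  # better model next round
        pool = still_unlabeled
    return fit_centroids(points, labels)

labeled = [(0.0, "a"), (10.0, "b")]
unlabeled = [1.0, 2.0, 8.5, 9.0, 5.2]
model = self_train(labeled, unlabeled)
```

The confidence filter is the crux: the ambiguous point 5.2 is never absorbed, which is the same guard that keeps noisy pseudo-labels from reinforcing a weak seed model. The "momentum" variant above refines this further by generating pseudo-labels from a slowly updated (momentum-averaged) copy of the model rather than the student itself.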

Semi-supervised speech recognition via graph-based temporal classification

N Moritz, T Hori, J Le Roux - ICASSP 2021-2021 IEEE …, 2021 - ieeexplore.ieee.org
Semi-supervised learning has demonstrated promising results in automatic speech
recognition (ASR) by self-training using a seed ASR model with pseudo-labels generated for …

Alternative pseudo-labeling for semi-supervised automatic speech recognition

H Zhu, D Gao, G Cheng, D Povey… - … /ACM Transactions on …, 2023 - ieeexplore.ieee.org
When labeled data is insufficient, pseudo-labeling based semi-supervised learning can
significantly improve the performance of automatic speech recognition. However, pseudo …

Cross-lingual self-training to learn multilingual representation for low-resource speech recognition

ZQ Zhang, Y Song, MH Wu, X Fang… - Circuits, Systems, and …, 2022 - Springer
Representation learning or pre-training has shown promising performance for low-resource
speech recognition, which suffers from data shortage. Recently, self-supervised …