Deep unsupervised domain adaptation: A review of recent advances and perspectives
Deep learning has become the method of choice to tackle real-world problems in different
domains, partly because of its ability to learn from data and achieve impressive performance …
Domain adaptation: challenges, methods, datasets, and applications
Deep Neural Networks (DNNs) trained on one dataset (source domain) do not perform well
on another set of data (target domain), which is different from but has similar properties to the …
Robust wav2vec 2.0: Analyzing domain shift in self-supervised pre-training
Self-supervised learning of speech representations has been a very active research area
but most work is focused on a single domain such as read audio books for which there exist …
In search for a generalizable method for source free domain adaptation
Source-free domain adaptation (SFDA) is compelling because it allows adapting an off-the-
shelf model to a new domain using only unlabelled data. In this work, we apply existing …
Momentum pseudo-labeling: Semi-supervised asr with continuously improving pseudo-labels
End-to-end automatic speech recognition (ASR) has become a popular alternative to
traditional module-based systems, simplifying the model-building process with a single deep …
Magic dust for cross-lingual adaptation of monolingual wav2vec-2.0
We propose a simple and effective cross-lingual transfer learning method to adapt
monolingual wav2vec-2.0 models for Automatic Speech Recognition (ASR) in resource …
Momentum pseudo-labeling for semi-supervised speech recognition
Pseudo-labeling (PL) has been shown to be effective in semi-supervised automatic speech
recognition (ASR), where a base model is self-trained with pseudo-labels generated from …
Semi-supervised speech recognition via graph-based temporal classification
Semi-supervised learning has demonstrated promising results in automatic speech
recognition (ASR) by self-training using a seed ASR model with pseudo-labels generated for …
Alternative pseudo-labeling for semi-supervised automatic speech recognition
When labeled data is insufficient, pseudo-labeling based semi-supervised learning can
significantly improve the performance of automatic speech recognition. However, pseudo …
Cross-lingual self-training to learn multilingual representation for low-resource speech recognition
ZQ Zhang, Y Song, MH Wu, X Fang… - Circuits, Systems, and …, 2022 - Springer
Representation learning or pre-training has shown promising performance for low-resource speech recognition, which suffers from data shortage. Recently, self-supervised …