Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge. B. van Niekerk, L. Nortje, H. Kamper. arXiv preprint arXiv:2005.09409, 2020. Cited by 132.
Unsupervised acoustic unit discovery for speech synthesis using discrete latent-variable neural networks. R. Eloff, A. Nortje, B. van Niekerk, A. Govender, L. Nortje, A. Pretorius, ... arXiv preprint arXiv:1904.07556, 2019. Cited by 59.
Analyzing speaker information in self-supervised models to improve zero-resource speech processing. B. van Niekerk, L. Nortje, M. Baas, H. Kamper. arXiv preprint arXiv:2108.00917, 2021. Cited by 32.
Unsupervised vs. transfer learning for multimodal one-shot matching of speech and images. L. Nortje, H. Kamper. arXiv preprint arXiv:2008.06258, 2020. Cited by 12.
Direct multimodal few-shot learning of speech and images. L. Nortje, H. Kamper. arXiv preprint arXiv:2012.05680, 2020. Cited by 7.
Towards visually prompted keyword localisation for zero-resource spoken languages. L. Nortje, H. Kamper. 2022 IEEE Spoken Language Technology Workshop (SLT), 700-707, 2023. Cited by 6.
Visually grounded few-shot word learning in low-resource settings. L. Nortje, D. Oneaţă, H. Kamper. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024. Cited by 2.
Visually grounded few-shot word acquisition with fewer shots. L. Nortje, B. van Niekerk, H. Kamper. arXiv preprint arXiv:2305.15937, 2023. Cited by 1.
Visually Grounded Speech Models Have a Mutual Exclusivity Bias. L. Nortje, D. Oneaţă, Y. Matusevych, H. Kamper. Transactions of the Association for Computational Linguistics 12, 755-770, 2024.
Direct and indirect multimodal few-shot learning of speech and images. L. Nortje. Stellenbosch: Stellenbosch University, 2020.