A Survey on Stability of Learning with Limited Labelled Data and its Sensitivity to the Effects of Randomness

B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning or few-shot learning, aims to effectively train a model using only a small amount of …

MAML and ANIL provably learn representations

L Collins, A Mokhtari, S Oh… - … on Machine Learning, 2022 - proceedings.mlr.press
Recent empirical evidence has driven conventional wisdom to believe that gradient-based
meta-learning (GBML) methods perform well at few-shot learning because they learn an …

ALP: Data augmentation using lexicalized PCFGs for few-shot text classification

HH Kim, D Woo, SJ Oh, JW Cha, YS Han - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Data augmentation has been an important ingredient for boosting performances of learned
models. Prior data augmentation methods for few-shot text classification have led to great …

On sensitivity of meta-learning to support data

M Agarwal, M Yurochkin, Y Sun - Advances in Neural …, 2021 - proceedings.neurips.cc
Meta-learning algorithms are widely used for few-shot learning. For example, image
recognition systems that readily adapt to unseen classes after seeing only a few labeled …

MetaMedSeg: volumetric meta-learning for few-shot organ segmentation

A Farshad, A Makarevich, V Belagiannis… - MICCAI Workshop on …, 2022 - Springer
The lack of sufficient annotated image data is a common issue in medical image
segmentation. For some organs and densities, the annotation may be scarce, leading to …

On the Effects of Randomness on Stability of Learning with Limited Labelled Data: A Systematic Literature Review

B Pecher, I Srba, M Bielikova - arXiv preprint arXiv:2312.01082, 2023 - arxiv.org
Learning with limited labelled data, such as few-shot learning, meta-learning or transfer
learning, aims to effectively train a model using only a small amount of labelled samples …

Two sides of meta-learning evaluation: In vs. out of distribution

A Setlur, O Li, V Smith - Advances in neural information …, 2021 - proceedings.neurips.cc
We categorize meta-learning evaluation into two settings: in-distribution (ID), in
which the train and test tasks are sampled i.i.d. from the same underlying task …

On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices

B Pecher, I Srba, M Bielikova - arXiv preprint arXiv:2402.12817, 2024 - arxiv.org
While learning with limited labelled data can improve performance when the labels are
lacking, it is also sensitive to the effects of uncontrolled randomness introduced by so-called …

The effect of diversity in meta-learning

R Kumar, T Deleu, Y Bengio - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
Recent studies show that task distribution plays a vital role in the meta-learner's
performance. Conventional wisdom is that task diversity should improve the performance of …
