Domain generalization: A survey
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …
Prompt-aligned gradient for prompt tuning
Thanks to the large pre-trained vision-language models (VLMs) like CLIP, we can craft a
zero-shot classifier by discrete prompt design, e.g., the confidence score of an image …
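The CLIP-style zero-shot scoring this entry refers to can be sketched as cosine similarity between an image embedding and one text embedding per class prompt, softmaxed into confidence scores. The sketch below uses random toy vectors as stand-ins for CLIP's encoders (an assumption for self-containment; in real use the text tower would encode prompts like "a photo of a [CLASS]"):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs):
    # L2-normalize both sides, then softmax over cosine similarities
    # (the CLIP-style scoring rule; temperature omitted for brevity).
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy stand-ins for encoder outputs: one image vector, one row per class prompt.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
probs = zero_shot_scores(image_emb, text_embs)
```

The class whose prompt embedding is most similar to the image embedding receives the highest probability; prompt-tuning methods optimize the prompt text (or its embedding) rather than the model weights.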
Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …
Improving out-of-distribution robustness via selective augmentation
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …
Sharpness-aware gradient matching for domain generalization
The goal of domain generalization (DG) is to enhance the generalization capability of the
model learned from a source domain to other unseen domains. The recently developed …
Domain generalization by mutual-information regularization with pre-trained models
Domain generalization (DG) aims to learn a generalized model to an unseen target
domain using only limited source domains. Previous attempts to DG fail to learn domain …
Federated domain generalization with generalization adjustment
Federated Domain Generalization (FedDG) attempts to learn a global model in a
privacy-preserving manner that generalizes well to new clients possibly with domain shift …
Discover and cure: Concept-aware mitigation of spurious correlation
Deep neural networks often rely on spurious correlations to make predictions, which hinders
generalization beyond training environments. For instance, models that associate cats with …
Ensemble of averages: Improving model selection and boosting performance in domain generalization
In Domain Generalization (DG) settings, models trained independently on a given
set of training domains have notoriously chaotic performance on distribution shifted test …
Using mixup as a regularizer can surprisingly improve accuracy & out-of-distribution robustness
We show that the effectiveness of the well-celebrated Mixup can be further improved if
instead of using it as the sole learning objective, it is utilized as an additional regularizer to …
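The idea in this last entry, using Mixup as an additional regularizer alongside the standard objective rather than as the sole loss, can be sketched in a few lines. The interpolation follows the standard Mixup recipe (a Beta-distributed mixing weight over both inputs and one-hot labels); `reg_weight` is a hypothetical knob for illustration, not the paper's exact weighting:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Standard Mixup: sample lam ~ Beta(alpha, alpha) and convexly
    # interpolate both inputs and (one-hot) labels.
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def cross_entropy(probs, soft_targets):
    # Cross-entropy against possibly soft (interpolated) targets.
    return -np.sum(soft_targets * np.log(probs + 1e-12))

def combined_loss(probs_clean, y, probs_mixed, y_mixed, reg_weight=0.5):
    # ERM loss on clean examples plus the Mixup loss as a regularizer,
    # instead of training on the mixed examples alone.
    return cross_entropy(probs_clean, y) + reg_weight * cross_entropy(probs_mixed, y_mixed)
```

Keeping the clean-example term preserves standard accuracy while the Mixup term smooths the model between training points, which is the source of the robustness gain the entry describes.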