A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Disentangling label distribution for long-tailed visual recognition

Y Hong, S Han, K Choi, S Seo… - Proceedings of the …, 2021 - openaccess.thecvf.com
The current evaluation protocol of long-tailed visual recognition trains the classification
model on the long-tailed source label distribution and evaluates its performance on the …

WILDS: A benchmark of in-the-wild distribution shifts

PW Koh, S Sagawa, H Marklund… - International …, 2021 - proceedings.mlr.press
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …

Leveraging unlabeled data to predict out-of-distribution performance

S Garg, S Balakrishnan, ZC Lipton… - arXiv preprint arXiv …, 2022 - arxiv.org
Real-world machine learning deployments are characterized by mismatches between the
source (training) and target (test) distributions that may cause performance drops. In this …

Improving generalization of machine learning-identified biomarkers using causal modelling with examples from immune receptor diagnostics

M Pavlović, GS Al Hajj, C Kanduri, J Pensar… - Nature Machine …, 2024 - nature.com
Machine learning is increasingly used to discover diagnostic and prognostic
biomarkers from high-dimensional molecular data. However, a variety of factors related to …

Self-attention between datapoints: Going beyond individual input-output pairs in deep learning

J Kossen, N Band, C Lyle, AN Gomez… - Advances in …, 2021 - proceedings.neurips.cc
We challenge a common assumption underlying most supervised deep learning: that a
model makes a prediction depending only on its parameters and the features of a single …

Subspace identification for multi-source domain adaptation

Z Li, R Cai, G Chen, B Sun, Z Hao… - Advances in Neural …, 2024 - proceedings.neurips.cc
Multi-source domain adaptation (MSDA) methods aim to transfer knowledge from multiple
labeled source domains to an unlabeled target domain. Although current methods achieve …

Domain adaptation under open set label shift

S Garg, S Balakrishnan… - Advances in Neural …, 2022 - proceedings.neurips.cc
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where
the label distribution can change arbitrarily and a new class may arrive during deployment …

Complementary benefits of contrastive learning and self-training under distribution shift

S Garg, A Setlur, Z Lipton… - Advances in …, 2024 - proceedings.neurips.cc
Self-training and contrastive learning have emerged as leading techniques for incorporating
unlabeled data, both under distribution shift (unsupervised domain adaptation) and when it …