Measuring robustness to natural distribution shifts in image classification

R Taori, A Dave, V Shankar, N Carlini… - Advances in …, 2020 - proceedings.neurips.cc
We study how robust current ImageNet models are to distribution shifts arising from natural
variations in datasets. Most research on robustness focuses on synthetic image …

Predictive overfitting in immunological applications: Pitfalls and solutions

JP Gygi, SH Kleinstein, L Guan - Human Vaccines & …, 2023 - Taylor & Francis
Overfitting describes the phenomenon where a highly predictive model on the training data
generalizes poorly to future observations. It is a common concern when applying machine …
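The gap this abstract describes is easy to reproduce in a few lines; the sketch below (using a synthetic dataset and an unpruned decision tree as placeholder choices, not the authors' immunological pipeline) shows a model scoring near-perfectly on its training data while doing much worse on held-out observations.

```python
# Minimal sketch of the train/held-out gap that characterizes overfitting.
# The synthetic data and decision-tree model are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Small, noisy dataset with many uninformative features invites memorization.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier().fit(X_tr, y_tr)  # fully grown tree memorizes training points
print("train accuracy:   ", model.score(X_tr, y_tr))  # typically near 1.0
print("held-out accuracy:", model.score(X_te, y_te))  # substantially lower
```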

AdaMatch: A unified approach to semi-supervised learning and domain adaptation

D Berthelot, R Roelofs, K Sohn, N Carlini… - arXiv preprint arXiv …, 2021 - arxiv.org
We extend semi-supervised learning to the problem of domain adaptation to learn
significantly higher-accuracy models that train on one data distribution and test on a different …

Understanding and mitigating the tradeoff between robustness and accuracy

A Raghunathan, SM Xie, F Yang, J Duchi… - arXiv preprint arXiv …, 2020 - arxiv.org
Adversarial training augments the training set with perturbations to improve the robust error
(over worst-case perturbations), but it often leads to an increase in the standard error (on …
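For readers unfamiliar with the procedure this abstract summarizes, the following is a minimal sketch of adversarial training with a single FGSM inner step; the model, the perturbation budget epsilon, and the PyTorch framing are assumptions for illustration, not the paper's actual setup.

```python
# Hedged sketch of adversarial training: augment each batch with worst-case
# L-inf perturbations (one FGSM step) and update the model on the perturbed inputs.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Return inputs perturbed in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Signed-gradient step, clamped back to the valid pixel range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One optimizer update computed on the adversarially perturbed batch."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training only on the perturbed batch, as above, improves robust error but can raise standard error; mixing clean and perturbed examples in each step is one common way the tradeoff is managed.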

Fuzz testing based data augmentation to improve robustness of deep neural networks

X Gao, RK Saha, MR Prasad… - Proceedings of the acm …, 2020 - dl.acm.org
Deep neural networks (DNN) have been shown to be notoriously brittle to small
perturbations in their input data. This problem is analogous to the over-fitting problem in test …

Towards viewpoint-invariant visual recognition via adversarial training

S Ruan, Y Dong, H Su, J Peng… - Proceedings of the …, 2023 - openaccess.thecvf.com
Visual recognition models are not invariant to viewpoint changes in the 3D world, as
different viewing directions can dramatically affect the predictions given the same object …

Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness

G Ortiz-Jiménez, A Modas… - Proceedings of the …, 2021 - ieeexplore.ieee.org
Driven by massive amounts of data and important advances in computational resources,
new deep learning systems have achieved outstanding results in a large spectrum of …

SenSeI: Sensitive set invariance for enforcing individual fairness

M Yurochkin, Y Sun - arXiv preprint arXiv:2006.14168, 2020 - arxiv.org
In this paper, we cast fair machine learning as invariant machine learning. We first formulate
a version of individual fairness that enforces invariance on certain sensitive sets. We then …

Improving viewpoint robustness for visual recognition via adversarial training

S Ruan, Y Dong, H Su, J Peng, N Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Viewpoint invariance remains challenging for visual recognition in the 3D world, as altering
the viewing directions can significantly impact predictions for the same object. While …

The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective

CH Lin, C Kaushik, EL Dyer, V Muthukumar - Journal of Machine Learning …, 2024 - jmlr.org
Data augmentation (DA) is a powerful workhorse for bolstering performance in modern
machine learning. Specific augmentations like translations and scaling in computer vision …
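As a concrete illustration of the translation and scaling augmentations this abstract mentions, the sketch below applies them with torchvision transforms; the dataset choice and augmentation magnitudes are placeholder assumptions, not the paper's experimental configuration.

```python
# Minimal sketch of translation/scaling data augmentation in a vision pipeline.
# CIFAR10 and the specific magnitudes are illustrative assumptions only.
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

augment = T.Compose([
    T.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.9, 1.1)),  # random shift and rescale
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# Augmentations are applied on the fly each time a training image is loaded.
train_set = CIFAR10(root="./data", train=True, download=True, transform=augment)
```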