Unlabeled data improves adversarial robustness
We demonstrate, theoretically and empirically, that adversarial robustness can significantly
benefit from semisupervised learning. Theoretically, we revisit the simple Gaussian model of …
The pitfalls of simplicity bias in neural networks
Several works have proposed Simplicity Bias (SB)---the tendency of standard training
procedures such as Stochastic Gradient Descent (SGD) to find simple models---to justify why …
Adversarial examples from computational constraints
Why are classifiers in high dimension vulnerable to “adversarial” perturbations? We show
that it is likely not due to information theoretic limitations, but rather it could be due to …
How benign is benign overfitting?
We investigate two causes for adversarial vulnerability in deep neural networks: bad data
and (poorly) trained models. When trained with SGD, deep neural networks essentially …
Adversarial learning guarantees for linear hypotheses and neural networks
Adversarial or test time robustness measures the susceptibility of a classifier to perturbations
to the test input. While there has been a flurry of recent work on designing defenses against …
On the existence of the adversarial Bayes classifier
Adversarial robustness is a critical property in a variety of modern machine learning
applications. While it has been the subject of several recent theoretical studies, many …
On the hardness of robust classification
It is becoming increasingly important to understand the vulnerability of machine learning
models to adversarial attacks. In this paper we study the feasibility of adversarially robust …
The complexity of adversarially robust proper learning of halfspaces with agnostic noise
I Diakonikolas, DM Kane… - Advances in Neural …, 2020 - proceedings.neurips.cc
We study the computational complexity of adversarially robust proper learning of halfspaces
in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations …
Improving adversarial robustness via unlabeled out-of-domain data
Data augmentation by incorporating cheap unlabeled data from multiple domains is a
powerful way to improve prediction especially when there is limited labeled data. In this …
Robust and private learning of halfspaces
In this work, we study the trade-off between differential privacy and adversarial robustness
under $L_2$-perturbations in the context of learning halfspaces. We prove nearly tight …