A review of single-source deep unsupervised visual domain adaptation
Large-scale labeled training datasets have enabled deep neural networks to excel across a
wide range of benchmark vision tasks. However, in many applications, it is prohibitively …
A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …
RobustBench: a standardized adversarial robustness benchmark
As a research community, we are still lacking a systematic understanding of the progress on
adversarial robustness which often makes it hard to identify the most promising ideas in …
Do adversarially robust ImageNet models transfer better?
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on
standard datasets can be efficiently adapted to downstream tasks. Typically, better pre …
Understanding and improving fast adversarial training
M Andriushchenko… - Advances in Neural …, 2020 - proceedings.neurips.cc
A recent line of work focused on making adversarial training computationally efficient for
deep learning models. In particular, Wong et al. (2020) showed that $\ell_\infty$-adversarial …
Certified adversarial robustness via randomized smoothing
We show how to turn any classifier that classifies well under Gaussian noise into a new
classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm. While this …
Rethinking lipschitz neural networks and certified robustness: A boolean function perspective
Designing neural networks with bounded Lipschitz constant is a promising way to obtain
certifiably robust classifiers against adversarial examples. However, the relevant progress …
An abstract domain for certifying neural networks
We present a novel method for scalable and precise certification of deep neural networks.
The key technical insight behind our approach is a new abstract domain which combines …
Provably robust deep learning via adversarially trained smoothed classifiers
Recent works have shown the effectiveness of randomized smoothing as a scalable
technique for building neural network-based classifiers that are provably robust to $\ell_2 …
Efficient neural network robustness certification with general activation functions
Finding minimum distortion of adversarial examples and thus certifying robustness in neural
network classifiers is known to be a challenging problem. Nevertheless, recently it has …