A review of single-source deep unsupervised visual domain adaptation

S Zhao, X Yue, S Zhang, B Li, H Zhao, et al. - IEEE Transactions on Neural Networks and Learning Systems, 2020 - ieeexplore.ieee.org
Large-scale labeled training datasets have enabled deep neural networks to excel across a
wide range of benchmark vision tasks. However, in many applications, it is prohibitively …

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

X Huang, D Kroening, W Ruan, J Sharp, Y Sun, et al. - Computer Science Review, 2020 - Elsevier
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …

RobustBench: a standardized adversarial robustness benchmark

F Croce, M Andriushchenko, V Sehwag, et al. - arXiv preprint, 2020 - arxiv.org
As a research community, we are still lacking a systematic understanding of the progress on
adversarial robustness which often makes it hard to identify the most promising ideas in …
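
The benchmark ships with a model zoo; a minimal loading sketch follows. The import path and keyword arguments follow the project's documented API (github.com/RobustBench/robustbench), and "Standard" is the non-robust CIFAR-10 baseline entry, but treat both as assumptions rather than guarantees.

from robustbench.utils import load_model

# Load a leaderboard model from the zoo; "Standard" is the plain
# (non-adversarially-trained) CIFAR-10 baseline under the l_inf threat model.
model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")
model.eval()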

Do adversarially robust ImageNet models transfer better?

H Salman, A Ilyas, L Engstrom, et al. - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on
standard datasets can be efficiently adapted to downstream tasks. Typically, better pre …
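
The transfer recipe the snippet describes can be sketched as ordinary fine-tuning: load a pretrained backbone (the paper swaps in adversarially trained ImageNet weights), replace the classification head, and train on the downstream task. A minimal PyTorch sketch; the standard torchvision weights and the 10-class downstream task are assumptions for illustration.

import torch
import torchvision

# Pretrained ImageNet backbone; the paper's variant would load adversarially
# trained weights here instead of the standard ones.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # assumed 10-class task

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# ...then train on the downstream dataset as usual.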

Understanding and improving fast adversarial training

M Andriushchenko, N Flammarion - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
A recent line of work focused on making adversarial training computationally efficient for
deep learning models. In particular, Wong et al. (2020) showed that $\ell_\infty$-adversarial …
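
The "fast" scheme referenced here replaces multi-step PGD with a single FGSM step taken from a random starting point inside the perturbation ball. A minimal PyTorch sketch of one training step, not the authors' code; model, loss_fn, epsilon, and alpha are assumed inputs, with images scaled to [0, 1].

import torch

def fgsm_training_step(model, loss_fn, optimizer, x, y, epsilon, alpha):
    # Random start inside the l_inf ball, as in Wong et al. (2020).
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    # One FGSM step on the perturbation, then project back onto the ball.
    delta = (delta + alpha * delta.grad.sign()).clamp(-epsilon, epsilon).detach()
    # Standard update on the perturbed batch (inputs assumed scaled to [0, 1]).
    optimizer.zero_grad()
    adv_loss = loss_fn(model((x + delta).clamp(0, 1)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()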

Certified adversarial robustness via randomized smoothing

J Cohen, E Rosenfeld, Z Kolter - International Conference on Machine Learning, 2019 - proceedings.mlr.press
We show how to turn any classifier that classifies well under Gaussian noise into a new
classifier that is certifiably robust to adversarial perturbations under the L2 norm. While this "randomized smoothing" …
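
The construction is simple to state: the smoothed classifier predicts the class the base classifier most often returns under Gaussian input noise. A minimal Monte Carlo sketch, not the paper's reference implementation (which adds a statistical test and an abstain option); base_classifier is assumed to map a batch of inputs to logits.

import torch

def smoothed_predict(base_classifier, x, sigma, n=1000, batch=100, num_classes=10):
    # Majority vote of the base classifier over Gaussian-noised copies of x.
    # The paper certifies an l2 radius of sigma * Phi^{-1}(p_A), where p_A is
    # a high-confidence lower bound on the top-class probability.
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n // batch):
            noisy = x.unsqueeze(0) + sigma * torch.randn(batch, *x.shape)
            counts += torch.bincount(base_classifier(noisy).argmax(dim=1),
                                     minlength=num_classes)
    return counts.argmax().item()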

Rethinking Lipschitz neural networks and certified robustness: A Boolean function perspective

B Zhang, D Jiang, D He, et al. - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Designing neural networks with bounded Lipschitz constant is a promising way to obtain
certifiably robust classifiers against adversarial examples. However, the relevant progress …
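
The certificate such networks afford is a one-line computation: if the network is L-Lipschitz under the l2 norm, a prediction with logit margin m cannot be flipped by any perturbation of l2 norm below m / (sqrt(2) * L). A minimal sketch, with the logits and the Lipschitz constant as assumed inputs.

import math
import torch

def certified_l2_radius(logits: torch.Tensor, lipschitz_constant: float) -> float:
    # Margin between the top two logits; the sqrt(2) accounts for the
    # Lipschitz constant of the difference of two output coordinates.
    top2 = logits.topk(2).values
    margin = (top2[0] - top2[1]).item()
    return margin / (math.sqrt(2) * lipschitz_constant)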

An abstract domain for certifying neural networks

G Singh, T Gehr, M Püschel, M Vechev - Proceedings of the ACM on Programming Languages, 2019 - dl.acm.org
We present a novel method for scalable and precise certification of deep neural networks.
The key technical insight behind our approach is a new abstract domain which combines …
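
For contrast with the paper's domain, which tracks symbolic linear bounds per neuron alongside intervals, the simplest sound abstract domain for this task is plain interval (box) propagation. A minimal NumPy sketch under assumed dense layers; the paper's DeepPoly domain is strictly more precise than this.

import numpy as np

def interval_affine(lo, hi, W, b):
    # Soundly push a box [lo, hi] through x -> W @ x + b by splitting the
    # weights into their positive and negative parts.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps boxes to boxes exactly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)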

Provably robust deep learning via adversarially trained smoothed classifiers

H Salman, J Li, I Razenshteyn, et al. - Advances in Neural Information Processing Systems, 2019 - proceedings.neurips.cc
Recent works have shown the effectiveness of randomized smoothing as a scalable
technique for building neural network-based classifiers that are provably robust to $\ell_2$ …

Efficient neural network robustness certification with general activation functions

H Zhang, TW Weng, PY Chen, et al. - Advances in Neural Information Processing Systems, 2018 - proceedings.neurips.cc
Finding minimum distortion of adversarial examples and thus certifying robustness in neural
network classifiers is known to be a challenging problem. Nevertheless, recently it has …
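
The core ingredient in this line of certification is a linear relaxation of each activation. For ReLU with pre-activation bounds l <= x <= u, a sketch of the bounding lines is below; the adaptive lower-bound slope is one common choice rather than the only one.

def relu_linear_relaxation(l: float, u: float):
    # Returns (slope, intercept) pairs (lower, upper) with
    # lower(x) <= relu(x) <= upper(x) for all x in [l, u].
    if u <= 0.0:                      # neuron always inactive: ReLU(x) = 0
        return (0.0, 0.0), (0.0, 0.0)
    if l >= 0.0:                      # neuron always active: ReLU(x) = x
        return (1.0, 0.0), (1.0, 0.0)
    upper_slope = u / (u - l)         # chord through (l, 0) and (u, u)
    upper = (upper_slope, -upper_slope * l)
    lower = ((1.0 if u >= -l else 0.0), 0.0)  # adaptive slope in {0, 1}
    return lower, upper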