A closer look at accuracy vs. robustness

YY Yang, C Rashtchian, H Zhang… - Advances in neural …, 2020 - proceedings.neurips.cc
Current methods for training robust networks lead to a drop in test accuracy, which has led
prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning …

Certified robustness via dynamic margin maximization and improved Lipschitz regularization

M Fazlyab, T Entesari, A Roy… - Advances in Neural …, 2024 - proceedings.neurips.cc
To improve the robustness of deep classifiers against adversarial perturbations, many
approaches have been proposed, such as designing new architectures with better …

Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)

Z Zhu, F Liu, G Chrysos… - Advances in neural …, 2022 - proceedings.neurips.cc
We study the average robustness notion in deep neural networks in (selected) wide and
narrow, deep and shallow, as well as lazy and non-lazy training settings. We prove that in …

DECODE: Deep confidence network for robust image classification

G Ding, Y Guo, K Chen, C Chu, J Han… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Recent years have witnessed the success of deep convolutional neural networks for image
classification and many related tasks. It should be pointed out that the existing training …

Rethinking Lipschitz neural networks and certified robustness: A Boolean function perspective

B Zhang, D Jiang, D He… - Advances in neural …, 2022 - proceedings.neurips.cc
Designing neural networks with bounded Lipschitz constant is a promising way to obtain
certifiably robust classifiers against adversarial examples. However, the relevant progress …

Direct parameterization of Lipschitz-bounded deep networks

R Wang, I Manchester - International Conference on …, 2023 - proceedings.mlr.press
This paper introduces a new parameterization of deep neural networks (both fully-connected
and convolutional) with guaranteed $\ell^2$ Lipschitz bounds, i.e., limited sensitivity to input …

A systematic review of robustness in deep learning for computer vision: Mind the gap?

N Drenkow, N Sani, I Shpitser, M Unberath - arXiv preprint arXiv …, 2021 - arxiv.org
Deep neural networks for computer vision are deployed in increasingly safety-critical and
socially-impactful applications, motivating the need to close the gap in model performance …

The robustness of deep networks: A geometrical perspective

A Fawzi, SM Moosavi-Dezfooli… - IEEE Signal Processing …, 2017 - ieeexplore.ieee.org
Deep neural networks have recently shown impressive classification performance on a
diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is …

Almost-orthogonal layers for efficient general-purpose Lipschitz networks

B Prach, CH Lampert - European Conference on Computer Vision, 2022 - Springer
It is a highly desirable property for deep networks to be robust against small input changes.
One popular way to achieve this property is by designing networks with a small Lipschitz …

Why robust generalization in deep learning is difficult: Perspective of expressive power

B Li, J Jin, H Zhong, J Hopcroft… - Advances in Neural …, 2022 - proceedings.neurips.cc
It is well-known that modern neural networks are vulnerable to adversarial examples. To
mitigate this problem, a series of robust learning algorithms have been proposed. However …