Cross-entropy loss functions: Theoretical analysis and applications

A Mao, M Mohri, Y Zhong - International conference on …, 2023 - proceedings.mlr.press
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network, when the softmax is used. But, what guarantees …
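A minimal sketch of the identity this snippet alludes to (notation here is illustrative, not taken from the paper): for network scores $z \in \mathbb{R}^K$ and true class $y$, the softmax cross-entropy is
$$\ell(z, y) = -\log \frac{e^{z_y}}{\sum_{k=1}^{K} e^{z_k}} = \log\Big(\sum_{k=1}^{K} e^{z_k - z_y}\Big),$$
i.e. the (multinomial) logistic loss evaluated directly on the network outputs; for $K = 2$ this reduces to the binary logistic loss $\log(1 + e^{-(z_y - z_{1-y})})$.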

Evaluating the adversarial robustness of adaptive test-time defenses

F Croce, S Gowal, T Brunner… - International …, 2022 - proceedings.mlr.press
Adaptive defenses, which optimize at test time, promise to improve adversarial robustness.
We categorize such adaptive test-time defenses, explain their potential benefits and …

A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies

Z Qian, K Huang, QF Wang, XY Zhang - Pattern Recognition, 2022 - Elsevier
Deep neural networks have achieved remarkable success in machine learning, computer
vision, and pattern recognition in the last few decades. Recent studies, however, show that …

Adversarial robustness of deep learning: Theory, algorithms, and applications

W Ruan, X Yi, X Huang - Proceedings of the 30th ACM international …, 2021 - dl.acm.org
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning,
presenting a well-structured review of up-to-date techniques to assess the vulnerability of …

Perturbation diversity certificates robust generalization

Z Qian, S Zhang, K Huang, Q Wang, X Yi, B Gu… - Neural Networks, 2024 - Elsevier
Whilst adversarial training has been proven to be one of the most effective defense methods against adversarial attacks on deep neural networks, it suffers from over-fitting on training …

Domain invariant adversarial learning

M Levi, I Attias, A Kontorovich - arXiv preprint arXiv:2104.00322, 2021 - arxiv.org
The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of
deep neural networks. Among the variety of techniques introduced to surmount this inherent …

Push stricter to decide better: A class-conditional feature adaptive framework for improving adversarial robustness

JL Yin, B Chen, W Zhu, BH Chen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In response to the threat of adversarial examples, adversarial training provides an attractive
option for improving robustness by training models on online-augmented adversarial …

Direct Adversarial Latent Estimation to Evaluate Decision Boundary Complexity in Black Box Models

AS Dale, L Christopher - IEEE Transactions on Artificial …, 2024 - ieeexplore.ieee.org
A trustworthy AI model should be robust to perturbed data, where robustness correlates with
the dimensionality and linearity of feature representations in the model latent space. Existing …

Robust generative adversarial network

S Zhang, Z Qian, K Huang, R Zhang, J Xiao, Y He… - Machine Learning, 2023 - Springer
Generative Adversarial Networks (GANs) are one of the most popular and powerful models for learning complex high-dimensional distributions. However, they usually suffer …

Evaluating and Improving the Robustness of Image Classifiers against Adversarial Attacks

F Croce - 2024 - tobias-lib.ub.uni-tuebingen.de
The decisions of state-of-the-art image classifiers based on neural networks can be easily
changed by small perturbations of the input which, at the same time, would not fool humans …