Adversarial machine learning in image classification: A survey toward the defender's perspective

GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep Learning algorithms have achieved state-of-the-art performance for Image
Classification. For this reason, they have been used even in security-critical applications …

Expectations for artificial intelligence (AI) in psychiatry

S Monteith, T Glenn, J Geddes, PC Whybrow… - Current Psychiatry …, 2022 - Springer
Purpose of Review: Artificial intelligence (AI) is often presented as a transformative
technology for clinical medicine even though the current technology maturity of AI is low. The …

Square attack: a query-efficient black-box adversarial attack via random search

M Andriushchenko, F Croce, N Flammarion… - European conference on …, 2020 - Springer
We propose the Square Attack, a score-based black-box ℓ2- and ℓ∞-adversarial attack
that does not rely on local gradient information and thus is not affected by …
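
The random-search idea in this abstract is simple enough to sketch. Below is a minimal, illustrative ℓ∞ variant that assumes a caller-supplied `score_fn` returning class scores for one image; the paper itself adds a stripe initialization and a decaying schedule for the square size, both omitted here:

```python
import numpy as np

def square_attack_linf(score_fn, x, y, eps=0.05, n_iters=1000, p=0.1, rng=None):
    """Minimal l-inf random-search attack in the spirit of Square Attack.

    score_fn(x) -> 1-D array of class scores; only queried, never differentiated.
    x: image as a float array in [0, 1], shape (H, W, C). y: true label index.
    p: fraction of the image area covered by each square (fixed here, whereas
    the paper decays it over iterations).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x.shape

    def margin(scores):
        # score of the true class minus the best other class;
        # negative margin means the input is already misclassified
        other = np.delete(np.asarray(scores), y)
        return scores[y] - other.max()

    # initialize with a random sign perturbation of magnitude eps
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)
    best = margin(score_fn(np.clip(x + delta, 0.0, 1.0)))

    side = max(1, int(round(np.sqrt(p) * min(h, w))))
    for _ in range(n_iters):
        if best < 0:  # success: the model no longer predicts y
            break
        # proposal: overwrite one random square window with fresh,
        # per-channel-constant +-eps values (color-constant squares)
        r = rng.integers(0, h - side + 1)
        s = rng.integers(0, w - side + 1)
        cand = delta.copy()
        cand[r:r + side, s:s + side, :] = eps * rng.choice([-1.0, 1.0], size=(1, 1, c))
        m = margin(score_fn(np.clip(x + cand, 0.0, 1.0)))
        if m < best:  # greedy acceptance: keep only improving proposals
            best, delta = m, cand
    return np.clip(x + delta, 0.0, 1.0)
```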

Witches' brew: Industrial scale data poisoning via gradient matching

J Geiping, L Fowl, WR Huang, W Czaja… - arXiv preprint arXiv …, 2020 - arxiv.org
Data Poisoning attacks modify training data to maliciously control a model trained on such
data. In this work, we focus on targeted poisoning attacks which cause a reclassification of …
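
The gradient matching named in the title can be sketched as an alignment loss. The PyTorch fragment below is a hedged sketch, not the authors' full method (which adds restarts, differentiable data augmentation, and an ε-ball constraint on the poison perturbations); `model`, `loss_fn`, and the tensors are assumed inputs:

```python
import torch

def gradient_matching_loss(model, loss_fn, x_poison, y_clean, x_target, y_adv):
    """Cosine dissimilarity between the training gradient produced by the
    poisons (which keep their clean labels) and the gradient the attacker
    wants, i.e. the one that relabels x_target as y_adv. Minimizing this
    w.r.t. the poison pixels aligns the two directions."""
    params = [p for p in model.parameters() if p.requires_grad]

    # gradient the attacker wants training to take (reclassify the target)
    g_tgt = torch.autograd.grad(loss_fn(model(x_target), y_adv), params)

    # gradient the clean-labeled poison batch actually produces;
    # create_graph=True so we can differentiate through it w.r.t. the poisons
    g_poi = torch.autograd.grad(loss_fn(model(x_poison), y_clean), params,
                                create_graph=True)

    num = sum((gp * gt.detach()).sum() for gp, gt in zip(g_poi, g_tgt))
    denom = (torch.sqrt(sum((gp ** 2).sum() for gp in g_poi)) *
             torch.sqrt(sum((gt.detach() ** 2).sum() for gt in g_tgt)))
    return 1.0 - num / denom  # 0 when the two gradients are perfectly aligned
```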

Shape matters: deformable patch attack

Z Chen, B Li, S Wu, J Xu, S Ding, W Zhang - European conference on …, 2022 - Springer
Though deep neural networks (DNNs) have demonstrated excellent performance in
computer vision, they are vulnerable to carefully crafted adversarial …

Sentinet: Detecting localized universal attacks against deep learning systems

E Chou, F Tramer, G Pellegrino - 2020 IEEE Security and …, 2020 - ieeexplore.ieee.org
SentiNet is a novel detection framework for localized universal attacks on neural networks.
These attacks restrict adversarial noise to contiguous portions of an image and are reusable …
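
SentiNet's transplant test can be caricatured in a few lines. The sketch below assumes hypothetical helpers `classify(x) -> label` and `salient_mask(x) -> (H, W) boolean` (e.g., a thresholded Grad-CAM map); the real system also overlays inert, blurred versions of the region to calibrate a decision boundary:

```python
import numpy as np

def sentinet_style_score(classify, salient_mask, x_suspect, benign_pool):
    """Cut out the most salient region of a suspect input and paste it onto
    benign images; a region that hijacks the prediction on many unrelated
    images behaves like a localized universal attack."""
    y_suspect = classify(x_suspect)
    mask = salient_mask(x_suspect)  # contiguous high-saliency region
    hijacked = 0
    for x_benign in benign_pool:
        patched = x_benign.copy()
        patched[mask] = x_suspect[mask]  # transplant the suspicious region
        if classify(patched) == y_suspect:  # prediction follows the region
            hijacked += 1
    return hijacked / len(benign_pool)  # high fraction => likely attack
```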

Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects

MA Alcorn, Q Li, Z Gong, C Wang… - Proceedings of the …, 2019 - openaccess.thecvf.com
Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail
to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones …

Perceptual-sensitive gan for generating adversarial patches

A Liu, X Liu, J Fan, Y Ma, A Zhang, H Xie… - Proceedings of the AAAI …, 2019 - ojs.aaai.org
Deep neural networks (DNNs) are vulnerable to adversarial examples, where inputs with
imperceptible perturbations mislead DNNs into incorrect predictions. Recently, adversarial patch …
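
For contrast with the GAN-based generator this title refers to, here is the generic baseline patch attack it builds on: directly optimizing raw patch pixels so that pasting the patch at a random location drives any image toward a chosen class. All names here are illustrative, and this is not the PS-GAN method itself:

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, loader, target_class, patch_size=32, lr=0.05):
    """Bare-bones targeted patch attack: the patch pixels are the only
    trainable parameters, updated to maximize the target-class probability
    when the patch is pasted at a random position in each batch."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for x, _ in loader:  # x: (B, 3, H, W), values in [0, 1]
        b, _, h, w = x.shape
        r = torch.randint(0, h - patch_size + 1, (1,)).item()
        c = torch.randint(0, w - patch_size + 1, (1,)).item()
        x_adv = x.clone()
        # paste the (clamped) patch; gradients flow back to `patch`
        x_adv[:, :, r:r + patch_size, c:c + patch_size] = patch.clamp(0, 1)
        y = torch.full((b,), target_class, dtype=torch.long)
        loss = F.cross_entropy(model(x_adv), y)  # pull predictions to target
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```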

Towards practical certifiable patch defense with vision transformer

Z Chen, B Li, J Xu, S Wu, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Patch attacks, among the most threatening forms of physical adversarial attack, can
induce misclassification by arbitrarily modifying pixels within a contiguous …

Performance vs. competence in human–machine comparisons

C Firestone - Proceedings of the National Academy of …, 2020 - National Acad Sciences
Does the human mind resemble the machines that can behave like it? Biologically inspired
machine-learning systems approach “human-level” accuracy in an astounding variety of …