Adversarial machine learning in image classification: A survey toward the defender's perspective
GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep Learning algorithms have achieved state-of-the-art performance for Image
Classification. For this reason, they have been used even in security-critical applications …
Expectations for artificial intelligence (AI) in psychiatry
Abstract Purpose of Review Artificial intelligence (AI) is often presented as a transformative
technology for clinical medicine even though the current technology maturity of AI is low. The …
Square attack: a query-efficient black-box adversarial attack via random search
Abstract We propose the Square Attack, a score-based black-box l_2- and l_∞-
adversarial attack that does not rely on local gradient information and thus is not affected by …
Witches' brew: Industrial scale data poisoning via gradient matching
Data Poisoning attacks modify training data to maliciously control a model trained on such
data. In this work, we focus on targeted poisoning attacks which cause a reclassification of …
Shape matters: deformable patch attack
Though deep neural networks (DNNs) have demonstrated excellent performance in
computer vision, they are vulnerable to carefully crafted adversarial …
Sentinet: Detecting localized universal attacks against deep learning systems
SentiNet is a novel detection framework for localized universal attacks on neural networks.
These attacks restrict adversarial noise to contiguous portions of an image and are reusable …
Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects
Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail
to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones …
Perceptual-sensitive gan for generating adversarial patches
Deep neural networks (DNNs) are vulnerable to adversarial examples where inputs with
imperceptible perturbations mislead DNNs to incorrect results. Recently, adversarial patch …
Towards practical certifiable patch defense with vision transformer
Patch attacks, one of the most threatening forms of physical attack in adversarial examples,
can lead networks to induce misclassification by modifying pixels arbitrarily in a continuous …
Performance vs. competence in human–machine comparisons
C Firestone - Proceedings of the National Academy of …, 2020 - National Acad Sciences
Does the human mind resemble the machines that can behave like it? Biologically inspired
machine-learning systems approach “human-level” accuracy in an astounding variety of …