Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

A backdoor attack against 3D point cloud classifiers

Z Xiang, DJ Miller, S Chen, X Li… - Proceedings of the …, 2021 - openaccess.thecvf.com
Vulnerability of 3D point cloud (PC) classifiers has become a grave concern due to the
popularity of 3D sensors in safety-critical applications. Existing adversarial attacks against …

UMD: Unsupervised model detection for X2X backdoor attacks

Z Xiang, Z Xiong, B Li - International Conference on …, 2023 - proceedings.mlr.press
A backdoor (Trojan) attack is a common threat to deep neural networks, where samples from
one or more source classes embedded with a backdoor trigger will be misclassified to …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, whereby such systems may produce …

CBD: A certified backdoor detector based on local dominant probability

Z Xiang, Z Xiong, B Li - Advances in Neural Information …, 2024 - proceedings.neurips.cc
A backdoor attack is a common threat to deep neural networks. During testing, samples
embedded with a backdoor trigger will be misclassified as an adversarial target by a …

Detecting backdoor attacks against point cloud classifiers

Z Xiang, DJ Miller, S Chen, X Li… - ICASSP 2022-2022 …, 2022 - ieeexplore.ieee.org
Backdoor attacks (BA) are an emerging threat to deep neural network classifiers. A classifier
under attack will predict the attacker's target class when a test sample from a source …

Reverse engineering imperceptible backdoor attacks on deep neural networks for detection and training set cleansing

Z Xiang, DJ Miller, G Kesidis - Computers & Security, 2021 - Elsevier
Backdoor data poisoning (aka Trojan attack) is an emerging form of adversarial attack
usually against deep neural network image classifiers. The attacker poisons the training set …
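
Several of the entries listed here describe the same underlying mechanism: the attacker stamps a trigger pattern onto a small fraction of training samples and relabels them to a target class. The sketch below illustrates that generic, BadNets-style dirty-label poisoning only; the corner patch trigger, poisoning rate, and array shapes are illustrative assumptions, not taken from any of the listed papers (this particular paper concerns imperceptible triggers).

```python
import numpy as np

def poison_training_set(images, labels, target_class, poison_rate=0.1, seed=0):
    """Stamp a small white patch (the 'trigger') onto a random subset of
    training images and relabel them to the attacker's target class.
    images: float array (N, H, W, C) in [0, 1]; labels: int array (N,)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # 3x3 patch in the bottom-right corner acts as the backdoor trigger.
    images[idx, -3:, -3:, :] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 random 32x32 RGB "images" with 10 classes.
X = np.random.rand(100, 32, 32, 3)
y = np.random.randint(0, 10, size=100)
Xp, yp, poisoned_idx = poison_training_set(X, y, target_class=7)
```

A classifier trained on (Xp, yp) would then tend to predict class 7 for any test image carrying the same corner patch, which is the test-time behavior the detection and cleansing methods listed here aim to expose.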

Universal post-training backdoor detection

H Wang, Z Xiang, DJ Miller, G Kesidis - arXiv preprint arXiv:2205.06900, 2022 - arxiv.org
A backdoor attack (BA) is an important type of adversarial attack against deep neural
network classifiers, wherein test samples from one or more source classes will be (mis) …

Improved activation clipping for universal backdoor mitigation and test-time detection

H Wang, Z Xiang, DJ Miller… - 2024 IEEE 34th …, 2024 - ieeexplore.ieee.org
Deep neural networks are vulnerable to backdoor attacks (Trojans), where an attacker
poisons the training set with backdoor triggers so that the neural network learns to classify …
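
The entry above refers to activation clipping as a post-training mitigation. The general idea, sketched below, is to bound a trained network's internal activations, since a backdoor trigger tends to induce unusually large activations; the fixed bound and blanket ReLU replacement here are illustrative assumptions, not this paper's specific procedure for choosing per-layer bounds.

```python
import torch
import torch.nn as nn

class ClippedReLU(nn.Module):
    """ReLU whose output is clamped at an upper bound, suppressing the
    abnormally large activations a backdoor trigger tends to induce."""
    def __init__(self, bound: float):
        super().__init__()
        self.bound = bound

    def forward(self, x):
        return torch.clamp(torch.relu(x), max=self.bound)

def clip_activations(model: nn.Module, bound: float = 2.0) -> nn.Module:
    """Replace every ReLU in a trained model with a bounded version."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, ClippedReLU(bound))
        else:
            clip_activations(child, bound)
    return model

# Toy usage on a small MLP.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
net = clip_activations(net, bound=2.0)
out = net(torch.randn(1, 8))
```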

Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set

Z Xiang, DJ Miller, H Wang, G Kesidis - Neural Computation, 2021 - direct.mit.edu
Backdoor data poisoning attacks add mislabeled examples carrying an embedded backdoor
pattern to the training set, so that the classifier learns to classify to a target class …