Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
A backdoor attack against 3D point cloud classifiers
The vulnerability of 3D point cloud (PC) classifiers has become a grave concern due to the
popularity of 3D sensors in safety-critical applications. Existing adversarial attacks against …
UMD: Unsupervised model detection for X2X backdoor attacks
A backdoor (Trojan) attack is a common threat to deep neural networks, where samples from
one or more source classes embedded with a backdoor trigger will be misclassified to …
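Several of the entries above share the same threat model: a small fraction of the training samples is stamped with a trigger pattern and relabeled to the attacker's target class, so that the trained classifier behaves normally on clean inputs but predicts the target class whenever the trigger is present. The sketch below is a generic illustration of that poisoning step only, not the method of any particular paper; the function name, corner-patch trigger, and parameters such as poison_rate and target_class are illustrative assumptions.

```python
# Minimal sketch of backdoor (Trojan) data poisoning as described in the
# abstracts above: stamp a small trigger patch onto a random subset of
# training images and flip their labels to the attacker's target class.
# All names and parameter choices here are illustrative assumptions.
import numpy as np

def poison_training_set(images, labels, target_class=0, poison_rate=0.05,
                        trigger_value=1.0, patch_size=3, seed=0):
    """images: (N, H, W, C) floats in [0, 1]; labels: (N,) integer classes."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Embed the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:, :] = trigger_value
    # Relabel the poisoned samples to the attacker's target class.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 random 32x32 RGB "images" with 10 classes.
X = np.random.rand(100, 32, 32, 3)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_training_set(X, y)
print(f"poisoned {len(poisoned_idx)} of {len(X)} training samples")
```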
Defenses in adversarial machine learning: A survey
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, whereby such systems may produce …
CBD: A certified backdoor detector based on local dominant probability
A backdoor attack is a common threat to deep neural networks. During testing, samples
embedded with a backdoor trigger will be misclassified as an adversarial target by a …
Detecting backdoor attacks against point cloud classifiers
Backdoor attacks (BA) are an emerging threat to deep neural network classifiers. A classifier
being attacked will predict the attacker's target class when a test sample from a source …
Reverse engineering imperceptible backdoor attacks on deep neural networks for detection and training set cleansing
Backdoor data poisoning (aka Trojan attack) is an emerging form of adversarial attack
usually against deep neural network image classifiers. The attacker poisons the training set …
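The entry above concerns reverse-engineering the backdoor trigger from a trained model for detection and training set cleansing. As a rough illustration of trigger inversion in general (in the spirit of mask-and-pattern optimization defenses, not necessarily this paper's formulation), one can optimize a candidate trigger that pushes clean samples toward a putative target class while penalizing its size; model, clean_loader, and all hyperparameters below are assumptions.

```python
# Generic sketch of backdoor trigger reverse engineering (trigger inversion):
# optimize a mask and pattern so that stamped clean images are classified as a
# putative target class, with an L1 penalty keeping the mask small. This is an
# illustration of the general idea, not any specific paper's algorithm.
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, clean_loader, target_class, image_shape,
                             steps=100, lam=1e-3, lr=0.1, device="cpu"):
    # image_shape = (C, H, W); images are assumed to lie in [0, 1].
    mask = torch.zeros(1, *image_shape[1:], device=device, requires_grad=True)
    pattern = torch.zeros(image_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    model.eval()
    for _ in range(steps):
        for x, _labels in clean_loader:
            x = x.to(device)
            m = torch.sigmoid(mask)                      # mask values in [0, 1]
            x_trig = (1 - m) * x + m * torch.sigmoid(pattern)
            logits = model(x_trig)
            target = torch.full((x.size(0),), target_class, device=device)
            # Misclassification loss plus a sparsity penalty on the mask.
            loss = F.cross_entropy(logits, target) + lam * m.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```

An unusually small reverse-engineered mask for some class is one common heuristic for flagging a backdoor to that class; how the recovered trigger is then used for training set cleansing is specific to the method in the paper.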
Universal post-training backdoor detection
A backdoor attack (BA) is an important type of adversarial attack against deep neural
network classifiers, wherein test samples from one or more source classes will be (mis) …
Improved activation clipping for universal backdoor mitigation and test-time detection
Deep neural networks are vulnerable to backdoor attacks (Trojans), where an attacker
poisons the training set with backdoor triggers so that the neural network learns to classify …
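The title above names activation clipping as a backdoor mitigation. The sketch below only illustrates the general idea that name suggests, capping internal activations using statistics from a small clean batch so that the unusually large activations that typically drive backdoor behavior are suppressed; the bound estimation, layer selection, and scaling factor are assumptions, not the paper's procedure.

```python
# Generic sketch of activation clipping as a backdoor mitigation: estimate
# per-ReLU activation maxima on clean data, then cap each ReLU's output at a
# scaled version of that maximum. Illustrative only; the bound-setting
# procedure in the cited work may differ.
import torch
import torch.nn as nn

class ClippedReLU(nn.Module):
    """ReLU whose output is additionally capped at `bound`."""
    def __init__(self, bound):
        super().__init__()
        self.bound = bound

    def forward(self, x):
        return torch.clamp(x, min=0.0, max=self.bound)

def apply_activation_clipping(model, clean_batch, scale=1.0):
    # Record the maximum output of every nn.ReLU on one clean batch.
    maxima, hooks = {}, []
    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, n=name: maxima.__setitem__(n, out.max().item())))
    model.eval()
    with torch.no_grad():
        model(clean_batch)
    for h in hooks:
        h.remove()
    # Replace each ReLU with a clipped version bounded by scale * observed max.
    for parent_name, parent in list(model.named_modules()):
        for child_name, child in parent.named_children():
            full = f"{parent_name}.{child_name}" if parent_name else child_name
            if isinstance(child, nn.ReLU) and full in maxima:
                setattr(parent, child_name, ClippedReLU(scale * maxima[full]))
    return model
```

Choosing scale below 1 trades a little clean accuracy for stronger suppression of backdoor activations; the test-time detection part of the title is not sketched here.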
Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set
Backdoor data poisoning attacks add mislabeled examples to the training set, with an
embedded backdoor pattern, so that the classifier learns to classify to a target class …