Image shortcut squeezing: Countering perturbative availability poisons with compression
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
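The entry above defends against PAP noise by compressing images before training. As a minimal illustration of the idea (not the paper's actual pipeline, which may use JPEG or grayscale compression), one simple "squeezing" operation is bit-depth reduction, which quantizes away small pixel-level perturbations:

```python
# Hypothetical sketch of a squeezing defense: reduce bit depth so that
# imperceptible poisoning perturbations collapse onto the same quantized level.
# This is an illustrative helper, not the paper's implementation.

def squeeze_bit_depth(pixels, bits=3):
    """Quantize pixel values in [0, 255] down to `bits` bits per channel."""
    levels = (1 << bits) - 1  # e.g. 7 quantization steps for 3 bits
    return [round(round(p / 255 * levels) / levels * 255) for p in pixels]

# Two pixels that differ by a tiny perturbation map to the same value:
print(squeeze_bit_depth([120, 121, 200]))  # [109, 109, 182]
```

Here the 1-unit difference between 120 and 121 disappears after quantization, which is the intuition behind compression-based countermeasures.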
Adversarial attack on attackers: Post-process to mitigate black-box score-based query attacks
Score-based query attacks (SQAs) pose practical threats to deep neural networks by
crafting adversarial perturbations within dozens of queries, only using the model's output …
One-pixel shortcut: on the learning preference of deep neural networks
Unlearnable examples (ULEs) aim to protect data from unauthorized usage for training
DNNs. Existing work adds $\ell_\infty$-bounded perturbations to the original sample so that …
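The entry above refers to $\ell_\infty$-bounded perturbations, the standard constraint in unlearnable-example work. A minimal sketch of what that bound means in code (hypothetical helper functions, with the common budget $\epsilon = 8/255$ assumed for illustration):

```python
# Sketch: enforcing an l_inf bound on an additive perturbation.
# Illustrative only; not any particular paper's implementation.

def clamp_linf(perturbation, epsilon):
    """Clip each element of a flat perturbation to [-epsilon, +epsilon]."""
    return [max(-epsilon, min(epsilon, d)) for d in perturbation]

def apply_perturbation(pixels, perturbation, epsilon=8 / 255):
    """Add an l_inf-bounded perturbation to pixels normalized to [0, 1]."""
    delta = clamp_linf(perturbation, epsilon)
    return [max(0.0, min(1.0, p + d)) for p, d in zip(pixels, delta)]

pixels = [0.2, 0.5, 0.99]
poisoned = apply_perturbation(pixels, [0.1, -0.1, 0.05])
```

Each raw perturbation value is first clipped to the $\epsilon$-ball, then the result is clamped back into the valid pixel range, so no pixel moves by more than $\epsilon$.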
ECLIPSE: Expunging clean-label indiscriminate poisons via sparse diffusion purification
Clean-label indiscriminate poisoning attacks add invisible perturbations to correctly labeled
training images, thus dramatically reducing the generalization capability of the victim …
Safeguarding medical image segmentation datasets against unauthorized training via contour- and texture-aware perturbations
The widespread availability of publicly accessible medical images has significantly
propelled advancements in various research and clinical fields. Nonetheless, concerns …
Corrupting convolution-based unlearnable datasets with pixel-based image transformations
Unlearnable datasets lead to a drastic drop in the generalization performance of models
trained on them by introducing elaborate and imperceptible perturbations into clean training …
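The entry above counters convolution-based unlearnable datasets with pixel-based image transformations. As one simple example of such a pixel-wise transform (an illustrative guess at the category, not the paper's method), grayscale conversion with the standard ITU-R BT.601 luma weights operates on each pixel independently:

```python
# Hypothetical sketch of a pixel-based transformation: grayscale conversion.
# Operates per pixel, with no spatial/convolutional structure involved.

def to_grayscale(rgb):
    """Convert one (R, G, B) pixel to a luma value using BT.601 weights."""
    r, g, b = rgb
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_grayscale((255, 0, 0)))  # 76
```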
Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously
Y Wang, Y Zhu, XS Gao - arXiv preprint arXiv:2402.04010, 2024 - arxiv.org
Availability attacks can prevent the unauthorized use of private data and commercial
datasets by generating imperceptible noise and making unlearnable examples before …
Provably Unlearnable Examples
The exploitation of publicly accessible data has led to escalating concerns regarding data
privacy and intellectual property (IP) breaches in the age of artificial intelligence. As a …
Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications
to training examples that are correctly labeled. Defenses against these poisoning attacks …
Enhancing Transferability of Targeted Adversarial Examples: A Self-Universal Perspective
Transfer-based targeted adversarial attacks against black-box deep neural networks (DNNs)
have been proven to be significantly more challenging than untargeted ones. The …