Image shortcut squeezing: Countering perturbative availability poisons with compression

Z Liu, Z Zhao, M Larson - International conference on …, 2023 - proceedings.mlr.press
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
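Compression-style countermeasures of this kind can be illustrated with a simple bit-depth "squeezing" step, a minimal NumPy sketch; the 3-bit depth and the toy 4/255 poison below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def bit_depth_squeeze(img: np.ndarray, bits: int = 3) -> np.ndarray:
    """Quantize a uint8 image to `bits` bits per channel.

    Coarse quantization can erase the small, imperceptible
    perturbations that availability poisons rely on, while
    keeping the semantic content needed for training.
    """
    levels = 2 ** bits - 1
    # Map [0, 255] -> {0, ..., levels}, then back to [0, 255].
    quantized = np.round(img.astype(np.float32) / 255.0 * levels)
    return (quantized / levels * 255.0).astype(np.uint8)

# Toy example: a "poisoned" image with a small l_inf perturbation (eps = 4/255).
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
noise = rng.integers(-4, 5, size=clean.shape)
poison = np.clip(clean.astype(int) + noise, 0, 255).astype(np.uint8)

# After squeezing, clean and poisoned pixels mostly land in the same bins,
# i.e. the perturbation is largely removed.
agreement = np.mean(bit_depth_squeeze(clean) == bit_depth_squeeze(poison))
```

The same idea underlies other squeezing operations (grayscaling, JPEG re-encoding): reduce the input's capacity to carry the poison while preserving what the classifier needs.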

Adversarial attack on attackers: Post-process to mitigate black-box score-based query attacks

S Chen, Z Huang, Q Tao, Y Wu… - Advances in Neural …, 2022 - proceedings.neurips.cc
Score-based query attacks (SQAs) pose practical threats to deep neural networks by
crafting adversarial perturbations within dozens of queries, only using the model's output …
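The query-only threat model can be illustrated by a minimal random-search loop that observes nothing but the model's output scores. This is a hedged sketch under toy assumptions: the linear "model", step size, and budget are illustrative and not any specific SQA from the paper:

```python
import numpy as np

def score_query_attack(score_fn, x, true_label, eps=0.3, queries=200, seed=0):
    """Minimal random-search score-based attack sketch.

    Repeatedly proposes small l_inf-bounded perturbations and keeps a
    change only if it lowers the score of the true class -- the attacker
    uses only output scores, never gradients or internals.
    """
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    best = score_fn(x + delta)[true_label]
    for _ in range(queries):
        step = 0.1 * rng.uniform(-eps, eps, size=x.shape)
        cand = np.clip(delta + step, -eps, eps)   # stay in the l_inf ball
        s = score_fn(x + cand)[true_label]
        if s < best:                              # keep score-decreasing moves
            best, delta = s, cand
    return x + delta

# Toy "model": softmax over a fixed linear map (stands in for a DNN).
W = np.array([[3.0, -1.0], [-1.0, 3.0]])
def score_fn(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([1.0, 0.0])                # confidently class 0
adv = score_query_attack(score_fn, x, true_label=0)
```

Post-processing defenses of the kind the paper studies perturb these returned scores so that such score-comparison steps become unreliable, without changing the model's predicted labels.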

One-pixel shortcut: on the learning preference of deep neural networks

S Wu, S Chen, C Xie, X Huang - arXiv preprint arXiv:2205.12141, 2022 - arxiv.org
Unlearnable examples (ULEs) aim to protect data from unauthorized usage for training
DNNs. Existing work adds $\ell_\infty$-bounded perturbations to the original sample so that …
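The two ingredients here can be sketched minimally: projecting a perturbation onto the $\ell_\infty$ ball, and a toy class-dependent one-pixel marker acting as a learnable "shortcut". The pixel position and class encoding below are assumptions for illustration, not the paper's searched shortcut:

```python
import numpy as np

def clip_linf(delta: np.ndarray, eps: float) -> np.ndarray:
    """Project a perturbation onto the l_inf ball of radius eps."""
    return np.clip(delta, -eps, eps)

def one_pixel_shortcut(images: np.ndarray, labels, num_classes: int) -> np.ndarray:
    """Illustrative one-pixel 'shortcut'.

    Writes a class-dependent value into a single fixed pixel, giving a
    network a trivially learnable feature that can short-circuit
    learning of the real image content. Pixel position (0, 0) and the
    linear class encoding are assumptions for this sketch.
    """
    out = images.copy()
    for i, y in enumerate(labels):
        out[i, 0, 0, :] = 255.0 * y / max(num_classes - 1, 1)
    return out

# Usage: two 4x4 RGB images from classes 0 and 1 out of 10.
imgs = np.zeros((2, 4, 4, 3), dtype=np.float32)
marked = one_pixel_shortcut(imgs, labels=[0, 1], num_classes=10)
```

The paper's observation is that even such a single-pixel feature can dominate what a DNN learns, which is why shortcut-style perturbations need not be dense or ℓ∞-bounded to make data unlearnable.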

ECLIPSE: Expunging clean-label indiscriminate poisons via sparse diffusion purification

X Wang, S Hu, Y Zhang, Z Zhou, LY Zhang… - … on Research in …, 2024 - Springer
Clean-label indiscriminate poisoning attacks add invisible perturbations to correctly labeled
training images, thus dramatically reducing the generalization capability of the victim …

Safeguarding medical image segmentation datasets against unauthorized training via contour- and texture-aware perturbations

X Lin, Y Yu, S Xia, J Jiang, H Wang, Z Yu, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread availability of publicly accessible medical images has significantly
propelled advancements in various research and clinical fields. Nonetheless, concerns …

Corrupting convolution-based unlearnable datasets with pixel-based image transformations

X Wang, S Hu, M Li, Z Yu, Z Zhou, LY Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Unlearnable datasets lead to a drastic drop in the generalization performance of models
trained on them by introducing elaborate and imperceptible perturbations into clean training …

Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously

Y Wang, Y Zhu, XS Gao - arXiv preprint arXiv:2402.04010, 2024 - arxiv.org
Availability attacks can prevent the unauthorized use of private data and commercial
datasets by generating imperceptible noise and making unlearnable examples before …

Provably Unlearnable Examples

D Wang, M Xue, B Li, S Camtepe, L Zhu - arXiv preprint arXiv:2405.03316, 2024 - arxiv.org
The exploitation of publicly accessible data has led to escalating concerns regarding data
privacy and intellectual property (IP) breaches in the age of artificial intelligence. As a …

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders

Y Yu, Y Wang, S Xia, W Yang, S Lu, YP Tan… - arXiv preprint arXiv …, 2024 - arxiv.org
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications
to training examples that are correctly labeled. Defenses against these poisoning attacks …

Enhancing Transferability of Targeted Adversarial Examples: A Self-Universal Perspective

B Peng, L Liu, T Liu, Z Liu, Y Liu - arXiv preprint arXiv:2407.15683, 2024 - arxiv.org
Transfer-based targeted adversarial attacks against black-box deep neural networks (DNNs)
have been proven to be significantly more challenging than untargeted ones. The …