Image shortcut squeezing: Countering perturbative availability poisons with compression

Z Liu, Z Zhao, M Larson - International conference on …, 2023 - proceedings.mlr.press
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
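A minimal sketch of the compression idea this paper pursues: re-encoding each poisoned training image through lossy JPEG before training, so that small availability perturbations are squeezed out. It assumes the standard Pillow API; the quality setting is illustrative, not the value tuned in the paper.

```python
import io
from PIL import Image

def jpeg_squeeze(image: Image.Image, quality: int = 10) -> Image.Image:
    """Round-trip an image through low-quality JPEG to erase small perturbations."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).copy()
```

In practice such a transform would be applied to every training image as a preprocessing step, before any model ever sees the poisoned pixels.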

What can we learn from unlearnable datasets?

P Sandoval-Segura, V Singla… - Advances in …, 2024 - proceedings.neurips.cc
In an era of widespread web scraping, unlearnable dataset methods have the potential to
protect data privacy by preventing deep neural networks from generalizing. But in addition to …

Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks

T Qin, X Gao, J Zhao, K Ye, CZ Xu - arXiv preprint arXiv:2303.15127, 2023 - arxiv.org
Unlearnable example attacks are data poisoning techniques that can be used to safeguard
public data against unauthorized use for training deep learning models. These methods add …
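A minimal sketch of adversarial (error-maximizing) augmentation in the spirit of this paper, assuming a PyTorch classifier: among several randomly augmented views of a batch, train on the one with the highest loss. The candidate count, the augmentation policy, and the per-batch (rather than per-sample) selection here are illustrative simplifications.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.5, 1.0)),
    T.ColorJitter(0.4, 0.4, 0.4),
])

def adversarial_augment(model, x, y, k: int = 5):
    """Return the augmented view of batch (x, y) with the highest loss."""
    worst_x, worst_loss = x, -float("inf")
    for _ in range(k):
        xa = augment(x)
        with torch.no_grad():  # selection only; gradients come later in training
            loss = F.cross_entropy(model(xa), y)
        if loss.item() > worst_loss:
            worst_x, worst_loss = xa, loss.item()
    return worst_x
```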

Unlearnable examples give a false sense of security: Piercing through unexploitable data with learnable examples

W Jiang, Y Diao, H Wang, J Sun, M Wang… - Proceedings of the 31st …, 2023 - dl.acm.org
Safeguarding data from unauthorized exploitation is vital for privacy and security, especially given the recent surge of research on security breaches such as adversarial and membership inference attacks. To …

Adversarial attack on attackers: Post-process to mitigate black-box score-based query attacks

S Chen, Z Huang, Q Tao, Y Wu… - Advances in Neural …, 2022 - proceedings.neurips.cc
Score-based query attacks (SQAs) pose practical threats to deep neural networks by
crafting adversarial perturbations within dozens of queries, only using the model's output …
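A minimal sketch of output post-processing against score-based query attacks, assuming a PyTorch model serving logits: add label-preserving noise to the exposed scores so that finite-difference gradient estimates become unreliable while benign accuracy is untouched. The noise scheme below is a generic illustration, not the paper's calibrated post-processing.

```python
import torch

def postprocess_logits(logits: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    """Add label-preserving noise to logits before exposing scores."""
    noisy = logits + scale * torch.randn_like(logits)
    # Restore the original argmax so the predicted label never changes.
    orig = logits.argmax(dim=-1)
    flipped = noisy.argmax(dim=-1) != orig
    if flipped.any():
        noisy[flipped, orig[flipped]] = noisy[flipped].max(dim=-1).values + 1.0
    return noisy
```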

Self-ensemble protection: Training checkpoints are good data protectors

S Chen, G Yuan, X Cheng, Y Gong, M Qin… - arXiv preprint arXiv …, 2022 - arxiv.org
As data becomes increasingly vital, a company would be very cautious about releasing data,
because competitors could use it to train high-performance models, thereby posing a …

APBench: A unified benchmark for availability poisoning attacks and defenses

T Qin, X Gao, J Zhao, K Ye, CZ Xu - arXiv preprint arXiv:2308.03258, 2023 - arxiv.org
The efficacy of availability poisoning, a method of poisoning data by injecting imperceptible
perturbations to prevent its use in model training, has been a hot subject of investigation …

Detection and defense of unlearnable examples

Y Zhu, L Yu, XS Gao - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Privacy preservation has become increasingly critical with the emergence of social media.
Unlearnable examples have been proposed to avoid leaking personal information on the …

Semantic deep hiding for robust unlearnable examples

R Meng, C Yi, Y Yu, S Yang, B Shen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Ensuring data privacy and protection has become paramount in the era of deep learning.
Unlearnable examples are proposed to mislead deep learning models and prevent data …

Corrupting convolution-based unlearnable datasets with pixel-based image transformations

X Wang, S Hu, M Li, Z Yu, Z Zhou, LY Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Unlearnable datasets lead to a drastic drop in the generalization performance of models
trained on them by introducing elaborate and imperceptible perturbations into clean training …
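A minimal sketch of one pixel-based transformation of the kind this paper studies, assuming PyTorch: randomly down-sampling a batch of images with bilinear interpolation and restoring the original resolution, which perturbs the pixel grid that convolution-based perturbations depend on. The scale range is illustrative.

```python
import random
import torch.nn.functional as F

def random_resize(x, lo: float = 0.5, hi: float = 0.9):
    """Down-sample a batch of images (N, C, H, W) and restore its size."""
    n, c, h, w = x.shape
    s = random.uniform(lo, hi)
    small = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)
```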