Image shortcut squeezing: Countering perturbative availability poisons with compression
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
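The "squeezing" idea behind this line of work can be sketched with a simple bit-depth reduction: quantizing 8-bit pixels to a few bits discards the low-order variation where imperceptible poisoning perturbations typically live. This is a minimal illustration, not the paper's implementation (ISS also considers grayscale and JPEG compression); the function name and the 3-bit default are my own assumptions.

```python
def bit_depth_squeeze(pixels, bits=3):
    """Quantize 8-bit pixel values down to `bits` bits, then scale back
    to the 0-255 range. The coarse quantization removes small, imperceptible
    perturbations of the kind used by availability poisons.

    `pixels` is a flat list of ints in [0, 255]; `bits=3` is illustrative.
    """
    levels = (1 << bits) - 1  # number of quantization steps minus one
    return [round(round(p / 255 * levels) * 255 / levels) for p in pixels]
```

For example, with `bits=3` every pixel is snapped to one of at most 8 values, so any perturbation smaller than half a quantization step is erased outright.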
What can we learn from unlearnable datasets?
P Sandoval-Segura, V Singla… - Advances in …, 2024 - proceedings.neurips.cc
In an era of widespread web scraping, unlearnable dataset methods have the potential to
protect data privacy by preventing deep neural networks from generalizing. But in addition to …
Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks
Unlearnable example attacks are data poisoning techniques that can be used to safeguard
public data against unauthorized use for training deep learning models. These methods add …
Unlearnable examples give a false sense of security: Piercing through unexploitable data with learnable examples
Safeguarding data from unauthorized exploitation is vital for privacy and security, especially
given rampant recent research on security breaches such as adversarial and membership attacks. To …
Adversarial attack on attackers: Post-process to mitigate black-box score-based query attacks
The score-based query attacks (SQAs) pose practical threats to deep neural networks by
crafting adversarial perturbations within dozens of queries, only using the model's output …
Self-ensemble protection: Training checkpoints are good data protectors
As data becomes increasingly vital, a company would be very cautious about releasing data,
because the competitors could use it to train high-performance models, thereby posing a …
APBench: A unified benchmark for availability poisoning attacks and defenses
The efficacy of availability poisoning, a method of poisoning data by injecting imperceptible
perturbations to prevent its use in model training, has been a hot subject of investigation …
Detection and defense of unlearnable examples
Y Zhu, L Yu, XS Gao - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Privacy preservation has become increasingly critical with the emergence of social media.
Unlearnable examples have been proposed to avoid leaking personal information on the …
Semantic deep hiding for robust unlearnable examples
Ensuring data privacy and protection has become paramount in the era of deep learning.
Unlearnable examples are proposed to mislead the deep learning models and prevent data …
Corrupting convolution-based unlearnable datasets with pixel-based image transformations
Unlearnable datasets lead to a drastic drop in the generalization performance of models
trained on them by introducing elaborate and imperceptible perturbations into clean training …