Shan, S., Ding, W., Passananti, J., Wu, S., Zheng, H., Zhao, B. Y. "Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models." 2024 IEEE Symposium on Security and Privacy (SP), 212-212, 2024. (Cited by 35*)

Jagielski, M., Wu, S., Oprea, A., Ullman, J., Geambasu, R. "How to Combine Membership-Inference Attacks on Multiple Updated Machine Learning Models." Proceedings on Privacy Enhancing Technologies, 2023. (Cited by 15*)

Abascal, J., Wu, S., Oprea, A., Ullman, J. "TMI! Finetuned Models Leak Private Information from Their Pretraining Data." arXiv preprint arXiv:2306.01181, 2023. (Cited by 7)

Passananti, J., Wu, S., Shan, S., Zheng, H., Zhao, B. Y. "Disrupting Style Mimicry Attacks on Video Imagery." arXiv preprint arXiv:2405.06865, 2024. (Cited by 1)

Shan, S., Wu, S., Zheng, H., Zhao, B. Y. "A Response to Glaze Purification via IMPRESS." arXiv preprint arXiv:2312.07731, 2023. (Cited by 1)

Abascal, J., Wu, S., Oprea, A., Ullman, J. "TMI! Finetuned Models Spill Secrets from Pretraining." The Second Workshop on New Frontiers in Adversarial Machine Learning, 2023. (Cited by 1)