Backdoor attacks against deep learning systems in the physical world. E Wenger, J Passananti, AN Bhagoji, Y Yao, H Zheng, BY Zhao. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021. | Cited by: 180 | 2021 |
Prompt-specific poisoning attacks on text-to-image generative models. S Shan, W Ding, J Passananti, H Zheng, BY Zhao. arXiv preprint arXiv:2310.13828, 2023. | Cited by: 32 | 2023 |
Backdoor attacks on facial recognition in the physical world. E Wenger, J Passananti, Y Yao, H Zheng, BY Zhao. arXiv preprint arXiv:2006.14580, 2020. | Cited by: 27 | 2020 |
Finding naturally occurring physical backdoors in image datasets. E Wenger, R Bhattacharjee, AN Bhagoji, J Passananti, E Andere, et al. Advances in Neural Information Processing Systems 35, 22103-22116, 2022. | Cited by: 13 | 2022 |
Backdoor attacks against deep learning systems in the physical world. E Wenger, J Passananti, AN Bhagoji, Y Yao, H Zheng, BY Zhao. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6202-6211, 2021. | Cited by: 12 | 2021 |
Natural backdoor datasets. E Wenger, R Bhattacharjee, AN Bhagoji, J Passananti, E Andere, et al. arXiv preprint arXiv:2206.10673, 2022. | Cited by: 8 | 2022 |
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. S Shan, W Ding, J Passananti, S Wu, H Zheng, BY Zhao. 2024 IEEE Symposium on Security and Privacy (SP), 212-212, 2024. | Cited by: 6 | 2024 |
Organic or Diffused: Can We Distinguish Human Art from AI-generated Images? AYJ Ha, J Passananti, R Bhaskar, S Shan, R Southen, H Zheng, BY Zhao. arXiv preprint arXiv:2402.03214, 2024. | Cited by: 5 | 2024 |
Assessing privacy risks from feature vector reconstruction attacks. E Wenger, F Falzon, J Passananti, H Zheng, BY Zhao. arXiv preprint arXiv:2202.05760, 2022. | Cited by: 2 | 2022 |
Disrupting Style Mimicry Attacks on Video Imagery. J Passananti, S Wu, S Shan, H Zheng, BY Zhao. arXiv preprint arXiv:2405.06865, 2024. | Cited by: 1 | 2024 |