Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models

D Lu, Z Wang, T Wang, W Guan… - Proceedings of the …, 2023 - openaccess.thecvf.com
Vision-language pre-training (VLP) models have shown vulnerability to adversarial
examples in multimodal tasks. Furthermore, malicious adversaries can be deliberately …
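
The snippet above only gestures at the threat model: adversarial images crafted on a surrogate model that transfer to other vision-language models. As a point of reference, below is a minimal PGD-style sketch of an image-side attack against an image-text matching model. It is not the paper's Set-level Guidance Attack (which additionally uses set-level, alignment-preserving guidance across scales and paired captions); the `encode_image` / `encode_text` interface is an assumption modeled on CLIP-style libraries, and the surrogate model is whatever white-box VLP model you have available.

```python
# Generic PGD sketch: push an image's embedding away from its paired text
# embedding on a surrogate image-text model, under an L_inf budget.
# NOT the SGA method from the paper; illustrative only.
import torch
import torch.nn.functional as F

def pgd_image_attack(model, image, text_embed, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return an adversarial image whose embedding is dissimilar to text_embed.

    model      : surrogate VLP model assumed to expose encode_image(...)
    image      : input tensor in [0, 1], shape (B, C, H, W)
    text_embed : L2-normalized text embedding of the paired caption
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        img_embed = F.normalize(model.encode_image(adv), dim=-1)
        # Minimize cosine similarity between the image and its caption.
        loss = F.cosine_similarity(img_embed, text_embed, dim=-1).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                 # step down the similarity
            adv = image + (adv - image).clamp(-eps, eps)    # project back into the L_inf ball
            adv = adv.clamp(0.0, 1.0).detach()              # keep a valid image
    return adv
```

In a transfer setting, `adv` would then be evaluated against a different, black-box vision-language model; the paper's contribution is in making such transfers succeed far more often than this single-pair baseline.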
