One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models
Vision-Language Pre-training (VLP) models trained on large-scale image-text pairs have
demonstrated unprecedented capability in many practical applications. However, previous …
H Fang, J Kong, W Yu, B Chen, J Li, S Xia… - arXiv e …, 2024 - ui.adsabs.harvard.edu