Detecting backdoors in pre-trained encoders
Self-supervised learning in computer vision trains on unlabeled data, such as images or
(image, text) pairs, to obtain an image encoder that learns high-quality embeddings for input …
Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection
Object detection plays a key role in many security-critical systems. Adversarial patch attacks,
which are easy to implement in the physical world, pose a serious threat to state-of-the-art …
PatchGuard: A provably robust defense against adversarial patches via small receptive fields and masking
Localized adversarial patches aim to induce misclassification in machine learning models
by arbitrarily modifying pixels within a restricted region of an image. Such attacks can be …
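Several of the entries in this list share the same threat model: an adversarial patch arbitrarily modifies pixels within a restricted region of an image, leaving everything outside that region untouched. A minimal, generic sketch of that operation follows; the function name and toy pixel values are illustrative only and do not come from any of the listed papers, whose patches are adversarially optimized rather than fixed.

```python
def apply_patch(image, patch, top, left):
    """Paste a patch (2-D list of pixel values) onto a copy of the image.

    Every pixel outside the restricted region is left untouched, which
    is what makes the attack 'localized'. The patch values here are
    placeholders standing in for adversarially optimized pixels.
    """
    patched = [row[:] for row in image]  # deep-copy rows
    for i, patch_row in enumerate(patch):
        for j, value in enumerate(patch_row):
            patched[top + i][left + j] = value
    return patched

# Toy example: an 8x8 all-zero "image" with a 2x2 patch of ones at (3, 3)
image = [[0] * 8 for _ in range(8)]
patch = [[1, 1], [1, 1]]
adv = apply_patch(image, patch, top=3, left=3)

# Count how many pixels the attack changed — only those inside the region
changed = sum(a != b
              for adv_row, img_row in zip(adv, image)
              for a, b in zip(adv_row, img_row))
```

Defenses such as those surveyed here exploit exactly this locality: the corruption is confined to a small, contiguous region, which masking- or receptive-field-based defenses can bound or remove.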
PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier
The adversarial patch attack against image classification models aims to inject adversarially
crafted pixels within a restricted image region (i.e., a patch) for inducing model …
Harnessing perceptual adversarial patches for crowd counting
Crowd counting, which has been widely adopted for estimating the number of people in
safety-critical scenes, is shown to be vulnerable to adversarial examples in the physical …
Detectorguard: Provably securing object detectors against localized patch hiding attacks
State-of-the-art object detectors are vulnerable to localized patch hiding attacks, where an
adversary introduces a small adversarial patch to make detectors miss the detection of …
Elijah: Eliminating backdoors injected in diffusion models via distribution shift
Diffusion models (DMs) have become state-of-the-art generative models because of their
capability of generating high-quality images from noise without adversarial training …
Adversarial patch attacks and defences in vision-based tasks: A survey
Adversarial attacks on deep learning models, especially in safety-critical systems, have
attracted increasing attention in recent years, due to the lack of trust in the security and …
REAP: a large-scale realistic adversarial patch benchmark
N Hingun, C Sitawarin, J Li… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Machine learning models are known to be susceptible to adversarial perturbation.
One famous attack is the adversarial patch, a particularly crafted sticker that makes the …
PatchGuard++: Efficient provable attack detection against adversarial patches
An adversarial patch can arbitrarily manipulate image pixels within a restricted region to
induce model misclassification. The threat of this localized attack has gained significant …