Enhancing fine-tuning based backdoor defense with sharpness-aware minimization
Backdoor defense, which aims to detect or mitigate the effect of malicious triggers introduced
by attackers, is becoming increasingly critical for machine learning security and integrity …
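A minimal sketch of the sharpness-aware minimization (SAM) step named in the title, as it might be applied during fine-tuning; the model, loss function, base optimizer, and rho value are assumed placeholders, and this is a generic illustration of SAM rather than the paper's actual defense.

    # Generic SAM fine-tuning step (sketch); not the paper's exact defense.
    import torch

    def sam_finetune_step(model, loss_fn, x, y, base_opt, rho=0.05):
        # First pass: gradients at the current weights.
        loss = loss_fn(model(x), y)
        loss.backward()
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2) + 1e-12

        # Ascent step: perturb each weight along its gradient so the total
        # perturbation has norm rho (the "sharp" neighbourhood point).
        eps = []
        with torch.no_grad():
            for p in model.parameters():
                e = rho * p.grad / grad_norm if p.grad is not None else None
                if e is not None:
                    p.add_(e)
                eps.append(e)
        model.zero_grad()

        # Second pass: gradients at the perturbed weights.
        loss_fn(model(x), y).backward()

        # Restore the original weights, then descend with the perturbed gradient.
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                if e is not None:
                    p.sub_(e)
        base_opt.step()
        base_opt.zero_grad()
        return loss.item()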
Less is more: Fewer interpretable region via submodular subset selection
Image attribution algorithms aim to identify important regions that are highly relevant to
model decisions. Although existing attribution solutions can effectively assign importance to …
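For context, submodular subset selection is commonly carried out with the standard greedy routine sketched below; the candidate regions, the marginal-gain function gain, and the budget k are hypothetical placeholders, not the paper's actual attribution objective.

    # Standard greedy maximization of a monotone submodular set function (sketch).
    def greedy_submodular_select(regions, gain, k):
        selected, remaining = [], list(regions)
        for _ in range(k):
            # Add the sub-region with the largest marginal gain with respect
            # to the currently selected set.
            best = max(remaining, key=lambda r: gain(selected + [r]) - gain(selected))
            selected.append(best)
            remaining.remove(best)
        return selected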
Poisoned forgery face: Towards backdoor attacks on face forgery detection
The proliferation of face forgery techniques has raised significant concerns within society,
thereby motivating the development of face forgery detection methods. These methods aim …
Ensemble-based blackbox attacks on dense prediction
We propose an approach for adversarial attacks on dense prediction models (such as object
detectors and segmentation models). It is well known that the attacks generated by a single …
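As a rough illustration of the ensemble idea (a perturbation crafted against several surrogate models tends to transfer better than one crafted against a single model), a minimal single-step version is sketched below; the surrogate models, loss function, and epsilon are assumed placeholders, not the paper's exact attack.

    # Single-step ensemble attack sketch: average the loss over surrogates,
    # then take one sign-gradient step on the input.
    import torch

    def ensemble_fgsm(surrogates, loss_fn, x, y, eps=8 / 255):
        x_adv = x.clone().detach().requires_grad_(True)
        loss = sum(loss_fn(m(x_adv), y) for m in surrogates) / len(surrogates)
        loss.backward()
        with torch.no_grad():
            # Perturb along the averaged gradient so the attack is not
            # overfitted to any single surrogate model.
            x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)
        return x_adv.detach()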
Privacy-enhancing face obfuscation guided by semantic-aware attribution maps
Face recognition technology is increasingly being integrated into our daily life, e.g., Face ID.
With the advancement of machine learning algorithms, personal information such as …
Does few-shot learning suffer from backdoor attacks?
The field of few-shot learning (FSL) has shown promising results in scenarios where training
data is limited, but its vulnerability to backdoor attacks remains largely unexplored. We first …
Vl-trojan: Multimodal instruction backdoor attacks against autoregressive visual language models
Autoregressive Visual Language Models (VLMs) showcase impressive few-shot learning
capabilities in a multimodal context. Recently, multimodal instruction tuning has been …
Fast propagation is better: Accelerating single-step adversarial training via sampling subnetworks
Adversarial training has shown promise in building robust models against adversarial
examples. A major drawback of adversarial training is the computational overhead …
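For reference, a plain single-step (FGSM) adversarial-training step, the baseline whose cost such methods aim to reduce, can be sketched as follows; the model, optimizer, and epsilon are placeholders, and the subnetwork-sampling speedup itself is not reproduced here.

    # Vanilla single-step (FGSM) adversarial training step (sketch).
    import torch

    def fgsm_adv_train_step(model, loss_fn, opt, x, y, eps=8 / 255):
        # One sign-gradient step on the clean input crafts the adversarial example.
        x.requires_grad_(True)
        loss_fn(model(x), y).backward()
        x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

        # Train on the adversarial example.
        opt.zero_grad()
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        opt.step()
        return loss.item()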
Isolation and induction: Training robust deep neural networks against model stealing attacks
Despite the broad application of Machine Learning as a Service (MLaaS), deployed models are
vulnerable to model stealing attacks. These attacks can replicate the model functionality by …
Face Encryption via Frequency-Restricted Identity-Agnostic Attacks
Billions of people share images of their daily lives on social media every day. However,
malicious collectors use deep face recognition systems to easily steal their biometric …