Prompt-specific poisoning attacks on text-to-image generative models
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …
Badmerging: Backdoor attacks against model merging
Fine-tuning pre-trained models for downstream tasks has led to a proliferation of open-
sourced task-specific models. Recently, Model Merging (MM) has emerged as an effective …
Ssl-cleanse: Trojan detection and mitigation in self-supervised learning
Self-supervised learning (SSL) is a prevalent approach for encoding data representations.
Using a pre-trained SSL image encoder and subsequently training a downstream classifier …
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Trained on billions of images, diffusion-based text-to-image models seem impervious to
traditional data poisoning attacks, which typically require poison samples approaching 20 …
Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment
In recent years there has been enormous interest in vision-language models trained using
self-supervised objectives. However, the use of large-scale datasets scraped from the web …
Transtroj: Transferable backdoor attacks to pre-trained models via embedding indistinguishability
Pre-trained models (PTMs) are extensively utilized in various downstream tasks. Adopting
untrusted PTMs may suffer from backdoor attacks, where the adversary can compromise the …
FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Few-shot classification with foundation models (e.g., CLIP, DINOv2, PaLM-2) enables users
to build an accurate classifier with a few labeled training samples (called support samples) …
Exploring the Vulnerability of Self-supervised Monocular Depth Estimation Models
Recent advancements in deep learning have substantially boosted the performance of
monocular depth estimation (MDE), an essential component in fully-vision-based …
Backdoor Contrastive Learning via Bi-level Trigger Optimization
Contrastive Learning (CL) has attracted enormous attention due to its remarkable capability
in unsupervised representation learning. However, recent works have revealed the …
[Book][B] Secure and Private Large Transformers
M Zheng - 2023 - search.proquest.com
Deep Learning's integration into critical sectors like autonomous vehicles and healthcare
diagnosis underscores the necessity for creating learning methods that are safe, secure …