Clean-image backdoor: Attacking multi-label models with poisoned labels only

K Chen, X Lou, G Xu, J Li, T Zhang - The Eleventh International …, 2022 - openreview.net
Multi-label models have been widely used in various applications, including image
annotation and object detection. The fly in the ointment is their inherent vulnerability to …

Watch out! simple horizontal class backdoor can trivially evade defense

H Ma, S Wang, Y Gao, Z Zhang, H Qiu, M Xue… - Proceedings of the …, 2024 - dl.acm.org
All current backdoor attacks on deep learning (DL) models fall under the category of
vertical class backdoors (VCB). In VCB attacks, any sample from a class activates the …

Django: Detecting trojans in object detection models via gaussian focus calibration

G Shen, S Cheng, G Tao, K Zhang… - Advances in …, 2023 - proceedings.neurips.cc
Object detection models are vulnerable to backdoor or trojan attacks, where an attacker can
inject malicious triggers into the model, leading to altered behavior during inference. As a …

A qualitative AI security risk assessment of autonomous vehicles

K Grosse, A Alahi - Transportation Research Part C: Emerging …, 2024 - Elsevier
This paper systematically analyzes the security risks associated with artificial intelligence
(AI) components in autonomous vehicles (AVs). Given the increasing reliance on AI for …

Tijo: Trigger inversion with joint optimization for defending multimodal backdoored models

I Sur, K Sikka, M Walmer… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a multimodal backdoor defense technique, TIJO (Trigger Inversion
using Joint Optimization). Recently, Walmer et al. demonstrated successful backdoor attacks …

Finding naturally occurring physical backdoors in image datasets

E Wenger, R Bhattacharjee… - Advances in …, 2022 - proceedings.neurips.cc
Extensive literature on backdoor poison attacks has studied attacks and defenses for
backdoors using “digital trigger patterns.” In contrast, “physical backdoors” use physical …

Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers

R Wang, H Chen, Z Zhu, L Liu, B Wu - arXiv preprint arXiv:2306.00816, 2023 - arxiv.org
Deep neural networks (DNNs) can be manipulated to exhibit specific behaviors when
exposed to specific trigger patterns, without affecting their performance on benign samples …

Security threats to agricultural artificial intelligence: Position and perspective

Y Gao, SA Camtepe, NH Sultan, HT Bui… - … and Electronics in …, 2024 - Elsevier
In light of their remarkable predictive capabilities, artificial intelligence (AI) models driven by
deep learning (DL) have witnessed widespread adoption in the agriculture sector …

Macab: Model-agnostic clean-annotation backdoor to object detection with natural trigger in real-world

H Ma, Y Li, Y Gao, Z Zhang, A Abuadbba, A Fu… - arXiv preprint arXiv …, 2022 - arxiv.org
Object detection is the foundation of various critical computer-vision tasks such as
segmentation, object tracking, and event detection. To train an object detector with …

Horizontal class backdoor to deep learning

H Ma, S Wang, Y Gao - arXiv preprint arXiv:2310.00542, 2023 - arxiv.org
All existing backdoor attacks on deep learning (DL) models belong to the vertical class
backdoor (VCB). That is, any sample from a class will activate the implanted backdoor in the …