MM-BD: Post-training detection of backdoor attacks with arbitrary backdoor pattern types using a maximum margin statistic

H Wang, Z Xiang, DJ Miller… - 2024 IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Backdoor attacks are an important type of adversarial threat against deep neural network
classifiers, wherein test samples from one or more source classes will be (mis)classified to …

Towards reliable and efficient backdoor trigger inversion via decoupling benign features

X Xu, K Huang, Y Li, Z Qin, K Ren - The Twelfth International …, 2024 - openreview.net
Recent studies revealed that using third-party models may lead to backdoor threats, where
adversaries can maliciously manipulate model predictions based on backdoors implanted …

Towards stealthy backdoor attacks against speech recognition via elements of sound

H Cai, P Zhang, H Dong, Y Xiao… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in
various applications of speech recognition. Recently, a few works revealed that these …

LOTUS: Evasive and resilient backdoor attacks through sub-partitioning

S Cheng, G Tao, Y Liu, G Shen, S An… - Proceedings of the …, 2024 - openaccess.thecvf.com
Backdoor attacks pose a significant security threat to deep learning applications. Existing
attacks are often not evasive to established backdoor detection techniques. This …

CBD: A certified backdoor detector based on local dominant probability

Z Xiang, Z Xiong, B Li - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Backdoor attacks are a common threat to deep neural networks. During testing, samples
embedded with a backdoor trigger will be misclassified as an adversarial target by a …

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

L Hou, R Feng, Z Hua, W Luo, LY Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can
maliciously trigger model misclassifications by implanting a hidden backdoor during model …

Model X-ray: Detecting Backdoored Models via Decision Boundary

Y Su, J Zhang, T Xu, T Zhang, W Zhang… - Proceedings of the 32nd …, 2024 - dl.acm.org
Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs),
enabling them to operate normally on clean inputs but manipulate predictions when specific …

Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models

Z Ni, R Ye, Y Wei, Z Xiang, Y Wang, S Chen - arXiv preprint arXiv …, 2024 - arxiv.org
Vision large language models (VLMs) have great application prospects in autonomous
driving. Despite the ability of VLMs to comprehend and make decisions in complex …

FLARE: Towards Universal Dataset Purification against Backdoor Attacks

L Hou, W Luo, Z Hua, S Chen, LY Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep neural networks (DNNs) are susceptible to backdoor attacks, where adversaries
poison datasets with adversary-specified triggers to implant hidden backdoors, enabling …

Adaptive Robust Learning Against Backdoor Attacks in Smart Homes

J Zhang, Z Wang, Z Ma, J Ma - IEEE Internet of Things Journal, 2024 - ieeexplore.ieee.org
Smart homes provide various services to people using AI (artificial intelligence)
models. To meet changing demands, devices in smart homes independently …