Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …

You Are Catching My Attention: Are Vision Transformers Bad Learners under Backdoor Attacks?

Z Yuan, P Zhou, K Zou… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Vision Transformers (ViTs), which made a splash in the field of computer vision
(CV), have shaken the dominance of convolutional neural networks (CNNs). However, in the …

Backdoor cleansing with unlabeled data

L Pang, T Sun, H Ling, C Chen - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Due to the increasing computational demand of Deep Neural Networks (DNNs), companies
and organizations have begun to outsource the training process. However, the externally …

Fine-mixing: Mitigating backdoors in fine-tuned language models

Z Zhang, L Lyu, X Ma, C Wang, X Sun - arXiv preprint arXiv:2210.09545, 2022 - arxiv.org
Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural
Language Processing (NLP), DNNs are often backdoored during the fine-tuning process of …

Security threats to agricultural artificial intelligence: Position and perspective

Y Gao, SA Camtepe, NH Sultan, HT Bui… - … and Electronics in …, 2024 - Elsevier
In light of their remarkable predictive capabilities, artificial intelligence (AI) models driven by
deep learning (DL) have witnessed widespread adoption in the agriculture sector …

Distilling cognitive backdoor patterns within an image

H Huang, X Ma, S Erfani, J Bailey - arXiv preprint arXiv:2301.10908, 2023 - arxiv.org
This paper proposes a simple method to distill and detect backdoor patterns within an
image: Cognitive Distillation (CD). The idea is to extract the "minimal essence" from …

Backdoor attacks on time series: A generative approach

Y Jiang, X Ma, SM Erfani… - 2023 IEEE Conference on …, 2023 - ieeexplore.ieee.org
Backdoor attacks have emerged as one of the major security threats to deep learning
models as they can easily control the model's test-time predictions by pre-injecting a …

Physical Backdoor: Towards Temperature-based Backdoor Attacks in the Physical World

W Yin, J Lou, P Zhou, Y Xie, D Feng… - Proceedings of the …, 2024 - openaccess.thecvf.com
Backdoor attacks have been well-studied in visible light object detection (VLOD) in recent
years. However, VLOD cannot effectively work in dark and temperature-sensitive scenarios …

Object detection and crowd analysis using deep learning techniques: Comprehensive review and future directions

B Ganga, BT Lata, KR Venugopal - Neurocomputing, 2024 - Elsevier
Object detection using deep learning has attracted considerable interest from researchers
because of its competency in performing state-of-the-art tasks, including detection …

Class-agnostic counting with feature augmentation and similarity comparison

M Shao, G Wang - Multimedia Systems, 2023 - Springer
The paper addresses the challenging problem of counting objects or entities in an image
without relying on specific category information, known as class-agnostic counting (CAC) …