Pre-trained language models and their applications

H Wang, J Li, H Wu, E Hovy, Y Sun - Engineering, 2023 - Elsevier
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …

Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs) so
that the attacked models perform well on benign samples, whereas their predictions will be …

Backdoor defense via decoupling the training process

K Huang, Y Li, B Wu, Z Qin, K Ren - arXiv preprint arXiv:2202.03423, 2022 - arxiv.org
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …

A systematic survey of prompt engineering on vision-language foundation models

J Gu, Z Han, S Chen, A Beirami, B He, G Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a technique that involves augmenting a large pre-trained model with
task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be …

BppAttack: Stealthy and efficient Trojan attacks against deep neural networks via image quantization and contrastive adversarial learning

Z Wang, J Zhai, S Ma - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Deep neural networks are vulnerable to Trojan attacks. Existing attacks use visible patterns
(e.g., a patch or image transformations) as triggers, which are vulnerable to human …

Dynamic backdoor attacks against machine learning models

A Salem, R Wen, M Backes, S Ma… - 2022 IEEE 7th …, 2022 - ieeexplore.ieee.org
Machine learning (ML) has made tremendous progress during the past decade and is being
adopted in various critical real-world applications. However, recent research has shown that …

Detecting backdoors in pre-trained encoders

S Feng, G Tao, S Cheng, G Shen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-supervised learning in computer vision trains on unlabeled data, such as images or
(image, text) pairs, to obtain an image encoder that learns high-quality embeddings for input …

Blockchain-based two-stage federated learning with non-IID data in IoMT system

Z Lian, Q Zeng, W Wang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The Internet of Medical Things (IoMT) has a bright future with the development of smart
mobile devices. Information technology is also leading changes in the healthcare industry …

Rethinking the reverse-engineering of trojan triggers

Z Wang, K Mei, H Ding, J Zhai… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep Neural Networks are vulnerable to Trojan (or backdoor) attacks. Reverse-
engineering methods can reconstruct the trigger and thus identify affected models. Existing …