Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Toward transparent AI: A survey on interpreting the inner structures of deep neural networks
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …
A survey of neural trojan attacks and defenses in deep learning
Artificial Intelligence (AI) relies heavily on deep learning, a technology that is becoming
increasingly popular in real-life applications of AI, even in the safety-critical and high-risk …
Position paper: Challenges and opportunities in topological deep learning
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to
understand and design deep learning models. This paper posits that TDL may complement …
Notable: Transferable backdoor attacks against prompt-based NLP models
Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against
prompt-based models consider injecting backdoors into the entire embedding layers or word …
Attention-enhancing backdoor attacks against BERT-based models
Recent studies have revealed that Backdoor Attacks can threaten the safety of natural
language processing (NLP) models. Investigating the strategies of backdoor attacks will help …
Defending against patch-based backdoor attacks on self-supervised learning
Recently, self-supervised learning (SSL) was shown to be vulnerable to patch-based data
poisoning backdoor attacks. It was shown that an adversary can poison a small part of the …
A study of the attention abnormality in trojaned BERTs
Trojan attacks raise serious security concerns. In this paper, we investigate the underlying
mechanism of Trojaned BERT models. We observe the attention focus drifting behavior of …
Defenses in adversarial machine learning: A survey
Adversarial phenomena have been widely observed in machine learning (ML) systems,
especially in those using deep neural networks, describing that ML systems may produce …
Backdoor attack and defense in federated generative adversarial network-based medical image synthesis
R Jin, X Li - Medical Image Analysis, 2023 - Elsevier
Deep Learning-based image synthesis techniques have been applied in healthcare
research for generating medical images to support open research and augment medical …