Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks
Graph neural networks (GNNs) have gained significant popularity as powerful deep learning
methods for processing graph data. However, centralized GNNs face challenges in data …
Cross-Context Backdoor Attacks against Graph Prompt Learning
Graph Prompt Learning (GPL) bridges significant disparities between pretraining and
downstream applications to alleviate the knowledge transfer bottleneck in real-world graph …
Reinforcement learning-based black-box evasion attacks to link prediction in dynamic graphs
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse
applications such as online recommendations, studies on disease contagion, organizational …
Defending against backdoor attack on graph neural network by explainability
B Jiang, Z Li - arXiv preprint arXiv:2209.02902, 2022 - arxiv.org
Backdoor attacks are powerful attacks against deep learning models. Recently, GNNs'
vulnerability to backdoor attacks has been demonstrated, especially on graph classification tasks. In …
Foobar: Fault fooling backdoor attack on neural network training
Neural network implementations are known to be vulnerable to physical attack vectors such
as fault injection attacks. As of now, these attacks were only utilized during the inference …
Ex-ray: Distinguishing injected backdoor from natural features in neural networks by examining differential feature symmetry
Backdoor attack injects malicious behavior to models such that inputs embedded with
triggers are misclassified to a target label desired by the attacker. However, natural features …
A semantic backdoor attack against graph convolutional networks
J Dai, Z Xiong, C Cao - Neurocomputing, 2024 - Elsevier
Graph convolutional networks (GCNs) have been very effective in addressing various
graph-structured tasks, such as node classification and graph classification …
EGC2: Enhanced graph classification with easy graph compression
Graph classification is crucial in network analyses. Networks face potential security threats,
such as adversarial attacks. Some defense methods may trade off the algorithm complexity …
MDTD: A Multi-Domain Trojan Detector for Deep Neural Networks
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor
attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation …
Dyn-backdoor: Backdoor attack on dynamic link prediction
J Chen, H Xiong, H Zheng, J Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Dynamic link prediction (DLP) makes graph prediction based on historical information. Since
most DLP methods are highly dependent on the training data to achieve satisfying prediction …