Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks

J Xu, S Koffas, S Picek - Digital Threats: Research and Practice, 2024 - dl.acm.org
Graph neural networks (GNNs) have gained significant popularity as powerful deep learning
methods for processing graph data. However, centralized GNNs face challenges in data …

Cross-Context Backdoor Attacks against Graph Prompt Learning

X Lyu, Y Han, W Wang, H Qian, I Tsang… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Prompt Learning (GPL) bridges significant disparities between pretraining and
downstream applications to alleviate the knowledge transfer bottleneck in real-world graph …

Reinforcement learning-based black-box evasion attacks to link prediction in dynamic graphs

H Fan, B Wang, P Zhou, A Li, Z Xu, C Fu… - 2021 IEEE 23rd Int …, 2021 - ieeexplore.ieee.org
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse
applications such as online recommendations, studies on disease contagion, organizational …

Defending against backdoor attack on graph neural network by explainability

B Jiang, Z Li - arXiv preprint arXiv:2209.02902, 2022 - arxiv.org
Backdoor attacks are powerful attacks against deep learning models. Recently, GNNs'
vulnerability to backdoor attacks has been demonstrated, especially on the graph classification task. In …

Foobar: Fault fooling backdoor attack on neural network training

J Breier, X Hou, M Ochoa… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Neural network implementations are known to be vulnerable to physical attack vectors such
as fault injection attacks. Until now, these attacks have only been utilized during the inference …

Ex-ray: Distinguishing injected backdoor from natural features in neural networks by examining differential feature symmetry

Y Liu, G Shen, G Tao, Z Wang, S Ma… - arXiv preprint arXiv …, 2021 - arxiv.org
Backdoor attack injects malicious behavior to models such that inputs embedded with
triggers are misclassified to a target label desired by the attacker. However, natural features …

A semantic backdoor attack against graph convolutional networks

J Dai, Z Xiong, C Cao - Neurocomputing, 2024 - Elsevier
Graph convolutional networks (GCNs) have been very effective in addressing
various graph-structured tasks, such as node classification and graph classification …

EGC2: Enhanced graph classification with easy graph compression

J Chen, H Xiong, H Zheng, D Zhang, J Zhang, M Jia… - Information …, 2023 - Elsevier
Graph classification is crucial in network analysis. Networks face potential security threats,
such as adversarial attacks. Some defense methods may trade off the algorithm complexity …

MDTD: A Multi-Domain Trojan Detector for Deep Neural Networks

A Rajabi, S Asokraj, F Jiang, L Niu… - Proceedings of the …, 2023 - dl.acm.org
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor
attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation …

Dyn-backdoor: Backdoor attack on dynamic link prediction

J Chen, H Xiong, H Zheng, J Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Dynamic link prediction (DLP) makes predictions over graphs based on historical information. Since
most DLP methods are highly dependent on the training data to achieve satisfying prediction …