Adversarial attacks and defenses in machine learning-empowered communication systems and networks: A contemporary survey

Y Wang, T Sun, S Li, X Yuan, W Ni… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
Adversarial attacks and defenses in machine learning and deep neural networks (DNNs) have
been gaining significant attention due to the rapidly growing applications of deep learning in …

Improving the transferability of adversarial samples by path-augmented method

J Zhang, J Huang, W Wang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks have achieved unprecedented success on diverse vision tasks.
However, they are vulnerable to adversarial noise that is imperceptible to humans. This …

Knowledge distillation improves graph structure augmentation for graph neural networks

L Wu, H Lin, Y Huang, SZ Li - Advances in Neural …, 2022 - proceedings.neurips.cc
Graph (structure) augmentation aims to perturb the graph structure through heuristic or
probabilistic rules, enabling the nodes to capture richer contextual information and thus …
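As a point of reference (not from the cited paper): one of the simplest probabilistic rules of the kind described above is random edge dropping, where each edge is kept independently with a fixed probability. The sketch below is a generic, hypothetical illustration of such an augmentation over a plain edge list.

```python
import random

def drop_edges(edge_list, drop_prob=0.2, seed=0):
    """Probabilistic structure augmentation: keep each edge independently
    with probability (1 - drop_prob)."""
    rng = random.Random(seed)
    return [e for e in edge_list if rng.random() >= drop_prob]

# Example: a small undirected graph given as (u, v) pairs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(drop_edges(edges, drop_prob=0.3))  # a random subset of the edges
```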

Extracting low-/high-frequency knowledge from graph neural networks and injecting it into MLPs: An effective GNN-to-MLP distillation framework

L Wu, H Lin, Y Huang, T Fan, SZ Li - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Recent years have witnessed the great success of Graph Neural Networks (GNNs) in
handling graph-related tasks. However, MLPs remain the primary workhorse for practical …

Towards reasonable budget allocation in untargeted graph structure attacks via gradient debias

Z Liu, Y Luo, L Wu, Z Liu, SZ Li - arXiv preprint arXiv:2304.00010, 2023 - arxiv.org
It has become cognitive inertia to employ the cross-entropy loss function in classification-related
tasks. In untargeted attacks on graph structure, the gradients derived from the attack …
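To make the role of those gradients concrete, here is a minimal, hypothetical sketch (in PyTorch, assuming a pretrained node classifier `gnn(features, adj)` and a dense adjacency matrix; this is not the attack proposed in the paper) of how the gradient of the cross-entropy loss with respect to the adjacency matrix can be used to pick a single edge flip in an untargeted structure attack.

```python
import torch
import torch.nn.functional as F

def greedy_edge_flip(gnn, adj, features, labels):
    """Flip the single (undirected) edge whose cross-entropy gradient
    suggests the largest increase in the classification loss."""
    adj = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(gnn(features, adj), labels)   # hypothetical model call
    grad = torch.autograd.grad(loss, adj)[0]
    # Adding an edge (0 -> 1) helps when the gradient is positive,
    # removing one (1 -> 0) helps when it is negative.
    score = grad * (1 - 2 * adj.detach())
    score.fill_diagonal_(float("-inf"))                   # never flip self-loops
    i, j = divmod(torch.argmax(score).item(), adj.size(1))
    perturbed = adj.detach().clone()
    perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed, (i, j)
```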

Feature‐Based Graph Backdoor Attack in the Node Classification Task

Y Chen, Z Ye, H Zhao, Y Wang - International Journal of …, 2023 - Wiley Online Library
Graph neural networks (GNNs) have shown significant performance in various practical
applications due to their strong learning capabilities. Backdoor attacks are a type of attack …

Imperceptible graph injection attack on graph neural networks

Y Chen, Z Ye, Z Wang, H Zhao - Complex & Intelligent Systems, 2024 - Springer
In recent years, Graph Neural Networks (GNNs) have achieved excellent results in
classification and prediction tasks. Recent studies have demonstrated that …

Safety in Graph Machine Learning: Threats and Safeguards

S Wang, Y Dong, B Zhang, Z Chen, X Fu, Y He… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Machine Learning (Graph ML) has witnessed substantial advancements in recent
years. With their remarkable ability to process graph-structured data, Graph ML techniques …

Learning to augment graph structure for both homophily and heterophily graphs

L Wu, C Tan, Z Liu, Z Gao, H Lin, SZ Li - Joint European Conference on …, 2023 - Springer
Recent years have witnessed great successes in performing graph structure learning for
Graph Neural Networks (GNNs). However, comparatively little work studies structure …

A Black-box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks

S Zhao, W Wang, Z Du, J Chen… - IEEE Transactions on Big …, 2023 - ieeexplore.ieee.org
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to well-
designed and imperceptible adversarial attacks. Attacks utilizing gradient information are …
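As background on the accelerated-gradient idea (a generic NI-FGSM-style iteration, not necessarily the exact update used in this paper; `model` and `loss_fn` are assumed placeholders): the gradient is computed at a look-ahead point along the momentum direction before the momentum buffer and the input are updated.

```python
import torch

def nesterov_attack_step(model, loss_fn, x_adv, y, momentum, step=0.01, mu=1.0):
    """One Nesterov-accelerated, FGSM-style attack step: look ahead along the
    momentum, take the gradient there, then update momentum and the input."""
    x_look = (x_adv + mu * step * momentum).detach().requires_grad_(True)
    loss = loss_fn(model(x_look), y)
    grad = torch.autograd.grad(loss, x_look)[0]
    # L1-normalise the gradient and fold it into the momentum buffer.
    momentum = mu * momentum + grad / grad.abs().mean().clamp_min(1e-12)
    x_adv = (x_adv + step * momentum.sign()).detach()
    return x_adv, momentum
```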