A comprehensive survey on graph anomaly detection with deep learning

X Ma, J Wu, S Xue, J Yang, C Zhou, et al. - IEEE Transactions on Knowledge and Data Engineering, 2021 - ieeexplore.ieee.org
Anomalies are rare observations (e.g., data records or events) that deviate significantly from
the others in the sample. Over the past few decades, research on anomaly mining has …

A comprehensive survey on deep graph representation learning

W Ju, Z Fang, Y Gu, Z Liu, Q Long, Z Qiao, Y Qin, et al. - Neural Networks, 2024 - Elsevier
Graph representation learning aims to effectively encode high-dimensional sparse graph-
structured data into low-dimensional dense vectors, which is a fundamental task that has …

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …
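The snippet is cut off, but the title names the paper's core mechanism: trust bootstrapping from a small clean root dataset held by the server. The sketch below illustrates that style of aggregation as I understand it (cosine-similarity trust scores, magnitude clipping, trust-weighted averaging); function and variable names are illustrative, not the paper's reference implementation.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Trust-bootstrapping aggregation sketch: the server computes its own update
    on a small clean root dataset, scores each client by ReLU(cosine similarity)
    against that update, rescales client updates to the server update's magnitude,
    and takes the trust-weighted average. Structure and names are illustrative."""
    g0 = np.asarray(server_update, dtype=np.float64)
    g0_norm = np.linalg.norm(g0) + 1e-12

    scores, normalized = [], []
    for g in client_updates:
        g = np.asarray(g, dtype=np.float64)
        g_norm = np.linalg.norm(g) + 1e-12
        cos = float(g @ g0) / (g_norm * g0_norm)
        scores.append(max(cos, 0.0))                  # ReLU-clipped trust score
        normalized.append(g * (g0_norm / g_norm))     # clip update magnitude to the server's

    total = sum(scores)
    if total == 0.0:                                  # every client distrusted: fall back to server update
        return g0
    return sum(s * g for s, g in zip(scores, normalized)) / total

# Toy usage: two roughly benign clients and one sign-flipped (malicious) client.
server = np.array([1.0, 0.5])
clients = [np.array([0.9, 0.6]), np.array([1.1, 0.4]), np.array([-5.0, -2.5])]
print(fltrust_aggregate(clients, server))             # stays close to the benign direction
```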

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural Networks and Learning Systems, 2022 - ieeexplore.ieee.org
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
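As a concrete reference point for the attack family this survey covers, the sketch below shows the classic trigger-based training-set poisoning (BadNets-style): stamp a small patch on a fraction of training images and relabel them to an attacker-chosen class. Trigger shape, location, poison rate, and target label here are illustrative assumptions, not a method from the survey itself.

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, patch=3, seed=0):
    """Minimal trigger-poisoning sketch: place a white square in the bottom-right
    corner of a random subset of images and relabel them to the target class.
    `images` is an (N, H, W) float array in [0, 1]; `labels` has length N."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0   # trigger pattern: small white patch
    labels[idx] = target_label            # attacker-chosen target class
    return images, labels, idx

# Toy usage on random "images".
x = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
x_p, y_p, poisoned = poison_dataset(x, y)
print(len(poisoned), "samples poisoned; relabeled to", set(y_p[poisoned]))
```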

Reflection backdoor: A natural backdoor attack on deep neural networks

Y Liu, X Ma, J Bailey, F Lu - Computer Vision - ECCV 2020: 16th European Conference, 2020 - Springer
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
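This paper's trigger is a natural-looking reflection rather than a conspicuous patch. The snippet below is a heavily simplified illustration of that idea (blur a "reflection" image and blend it additively into the clean image); the paper itself uses physically motivated reflection models, and the blending weight and blur used here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_reflection_trigger(image, reflection, alpha=0.4, blur_sigma=2.0):
    """Simplified reflection-style trigger: blur a reflection image (as if seen
    through glass) and blend it into the clean image. Both inputs are 2D float
    arrays in [0, 1]; parameters are illustrative."""
    r = gaussian_filter(reflection.astype(np.float64), sigma=blur_sigma)
    return np.clip(image.astype(np.float64) + alpha * r, 0.0, 1.0)
```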

A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability

E Dai, T Zhao, H Zhu, J Xu, Z Guo, H Liu, J Tang, et al. - arXiv preprint, 2022 - arxiv.org
Graph Neural Networks (GNNs) have developed rapidly in recent years. Due to their strong
ability to model graph-structured data, GNNs are widely used in various …

BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning

J Jia, Y Liu, NZ Gong - 2022 IEEE Symposium on Security and Privacy (SP), 2022 - ieeexplore.ieee.org
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can …
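Since the attack target here is the encoder rather than a classifier, the objective differs from label-flipping poisoning. The loss sketch below captures the spirit of encoder backdooring as described by the title: align embeddings of trigger-stamped inputs with a reference input's embedding while preserving clean-input embeddings of the original encoder. It is a simplified, assumption-laden stand-in, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def encoder_backdoor_loss(backdoored, clean, shadow_x, triggered_x, reference_x):
    """Simplified encoder-backdooring objective: an effectiveness term pulls
    triggered shadow inputs toward the reference embedding; a utility term keeps
    the backdoored encoder's clean embeddings close to the original encoder's.
    Term weighting and detachment choices are illustrative assumptions."""
    with torch.no_grad():
        ref = F.normalize(backdoored(reference_x), dim=-1)   # target embedding
        clean_emb = F.normalize(clean(shadow_x), dim=-1)      # original encoder's behavior
    trig = F.normalize(backdoored(triggered_x), dim=-1)
    shadow = F.normalize(backdoored(shadow_x), dim=-1)
    effectiveness = -(trig @ ref.T).mean()                    # align triggered inputs with reference
    utility = -(shadow * clean_emb).sum(-1).mean()            # preserve clean behavior
    return effectiveness + utility
```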

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon, et al. - ACM Computing Surveys, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen, et al. - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

DeepSweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation

H Qiu, Y Zeng, S Guo, T Zhang, M Qiu, et al. - Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIA CCS), 2021 - dl.acm.org
Public resources and services (e.g., datasets, training platforms, pre-trained models) have
been widely adopted to ease the development of Deep Learning-based applications …
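The defensive idea named in this title is augmentation-based mitigation: transform inputs so that fixed, spatially localized triggers no longer match what the backdoor expects. The sketch below shows a single inference-time transformation as a stand-in; DeepSweep itself searches for an effective augmentation policy and also fine-tunes the model, so the specific rotation used here is only an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import rotate

def disrupt_trigger(image, max_angle=15, seed=None):
    """Apply a small random rotation to a 2D input before inference so that a
    fixed, pixel-aligned trigger pattern is spatially disrupted. The choice of
    augmentation and its strength are illustrative, not the paper's policy."""
    rng = np.random.default_rng(seed)
    angle = rng.uniform(-max_angle, max_angle)
    return rotate(image, angle, reshape=False, mode="nearest")

# Toy usage: preprocess a (possibly trigger-stamped) image before the model sees it.
x = np.random.rand(32, 32)
x_clean = disrupt_trigger(x, seed=0)
print(x_clean.shape)
```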