Securing distributed network digital twin systems against model poisoning attacks

Z Zhang, M Fang, M Chen, G Li, X Lin… - IEEE Internet of Things …, 2024 - ieeexplore.ieee.org
In the era of 5G and beyond, the increasing complexity of wireless networks necessitates
innovative frameworks for efficient management and deployment. Digital twins (DTs) …

Decaf: Data distribution decompose attack against federated learning

Z Dai, Y Gao, C Zhou, A Fu, Z Zhang… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
In contrast to prevalent Federated Learning (FL) privacy inference techniques such as
generative adversarial networks attacks, membership inference attacks, property inference …

LoBAM: LoRA-Based Backdoor Attack on Model Merging

M Yin, J Zhang, J Sun, M Fang, H Li, Y Chen - arXiv preprint arXiv …, 2024 - arxiv.org
Model merging is an emerging technique that integrates multiple models fine-tuned on
different tasks to create a versatile model that excels in multiple domains. This scheme, in …

Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning

B Li, X Miao, Y Shang, X Zhao, S Deng… - arXiv preprint arXiv …, 2025 - arxiv.org
Decentralized federated learning (DFL) is inherently vulnerable to poisoning attacks, as
malicious clients can transmit manipulated model gradients to neighboring clients. Existing …
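The snippet above describes the core threat: a malicious client transmits a manipulated gradient to its neighbors. A minimal sketch of that attack, and of a standard robust-aggregation baseline (coordinate-wise trimmed mean) that limits its influence, is below. Note this baseline is for contrast only; it is not the paper's gradient-purification method, and all gradients are toy values.

```python
# Toy decentralized-FL round: three honest neighbors and one attacker
# who sign-flips and scales its gradient. Trimmed mean (drop the extreme
# value at each end, per coordinate) is a common robust baseline --
# NOT the gradient-purification scheme of the cited paper.

def poisoned_gradient(honest_grad, scale=-10.0):
    """Simple poisoning attack: flip the sign and scale the gradient."""
    return [scale * g for g in honest_grad]

def trimmed_mean(grads, trim=1):
    """Coordinate-wise trimmed mean over a list of gradient vectors."""
    dim = len(grads[0])
    out = []
    for j in range(dim):
        col = sorted(g[j] for g in grads)
        kept = col[trim:len(col) - trim]   # drop `trim` smallest and largest
        out.append(sum(kept) / len(kept))
    return out

honest = [[0.9, -0.1], [1.0, 0.0], [1.1, 0.1]]
attacker = poisoned_gradient([1.0, 0.0])       # -> [-10.0, -0.0]
all_grads = honest + [attacker]

naive = [sum(g[j] for g in all_grads) / len(all_grads) for j in range(2)]
robust = trimmed_mean(all_grads, trim=1)
print(naive)    # [-1.75, 0.0]: one attacker drags the mean off the honest direction
print(robust)   # [0.95, 0.0]: trimmed mean stays near the honest gradients
```

The contrast illustrates why plain gradient averaging is fragile in DFL: a single neighbor can dominate the aggregate, whereas trimming bounds any one client's influence at the cost of discarding some honest information.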

On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks

M Fang, Z Zhang, A Velasquez… - 2024 22nd International …, 2024 - ieeexplore.ieee.org
In this paper, we study a fully-decentralized multi-agent policy evaluation problem, which is
an important sub-problem in cooperative multi-agent reinforcement learning, in the presence …
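In decentralized policy evaluation, each agent maintains its own value estimate and mixes it with neighbors' reports; a Byzantine agent can report arbitrary values. The sketch below contrasts mean mixing with coordinate-wise median mixing (a standard robust-consensus baseline, not necessarily the construction analyzed in the cited paper); the value estimates are illustrative numbers, not output of an actual TD update.

```python
# Hedged sketch: combining neighbors' value estimates V(s) for one state.
# One Byzantine agent reports an arbitrary value; the median bounds its
# influence while the mean does not.
from statistics import median

def mix(estimates, robust=False):
    """Combine neighbors' value estimates: mean, or median if robust."""
    if robust:
        return median(estimates)
    return sum(estimates) / len(estimates)

honest_reports = [1.02, 0.98, 1.00]   # honest agents' estimates of V(s)
byzantine_report = 1e6                # adversarial, unconstrained report
reports = honest_reports + [byzantine_report]

print(mix(reports))                 # 250000.75: mean ruined by one bad report
print(mix(reports, robust=True))    # 1.01: median stays near the honest value
```

This is the intuition behind hardness results in this setting: with mean-based consensus a single Byzantine agent can steer every neighbor's estimate arbitrarily, so any sound protocol must pay for some form of robust filtering.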