Cooperative fixed-time/finite-time distributed robust optimization of multi-agent systems
M Firouzbahrami, A Nobakhti - Automatica, 2022 - Elsevier
A new robust continuous-time optimization algorithm for distributed problems is presented
which guarantees fixed-time convergence. The algorithm is based on a Lyapunov function …
Aflguard: Byzantine-robust asynchronous federated learning
Federated learning (FL) is an emerging machine learning paradigm, in which clients jointly
learn a model with the help of a cloud server. A fundamental challenge of FL is that the …
Online distributed nonconvex optimization with stochastic objective functions: High probability bound analysis of dynamic regrets
In this paper, the problem of online distributed optimization with stochastic and nonconvex
objective functions is studied by employing a multi-agent system. When making decisions …
SF-CABD: Secure Byzantine fault tolerance federated learning on Non-IID data
X Lin, Y Li, X Xie, Y Ding, X Wu, C Ge - Knowledge-Based Systems, 2024 - Elsevier
Federated learning facilitates collaborative learning among multiple parties while ensuring
client privacy. The vulnerability of federated learning to diverse Byzantine attacks stems from …
Secure distributed optimization under gradient attacks
S Yu, S Kar - IEEE Transactions on Signal Processing, 2023 - ieeexplore.ieee.org
In this article, we study secure distributed optimization against arbitrary gradient attacks in
multi-agent networks. In distributed optimization, there is no central server to coordinate …
Online distributed optimization with strongly pseudoconvex-sum cost functions and coupled inequality constraints
In this paper, the problem of online distributed optimization with coupled inequality
constraints is studied by employing multi-agent systems. Each agent only has access to the …
Byzantine-resilient Federated Learning Employing Normalized Gradients on Non-IID Datasets
In practical federated learning (FL) systems, the presence of malicious Byzantine attacks
and data heterogeneity often introduces biases into the learning process. However, existing …
Distributed Active Client Selection With Noisy Clients Using Model Association Scores
KI Kim - European Conference on Computer Vision, 2025 - Springer
Active client selection (ACS) strategically identifies clients for model updates during each
training round of federated learning. In scenarios with limited communication resources …
Communication-efficient federated learning using censored heavy ball descent
Distributed machine learning enables scalability and computational offloading, but requires
significant levels of communication. Consequently, communication efficiency in distributed …
Online Optimization Under Randomly Corrupted Attacks
Existing algorithms in online optimization usually rely on trusted information, e.g., reliable
knowledge of gradients, which makes them vulnerable to attacks. To take into account the …