On the impact of machine learning randomness on group fairness

P Ganesh, H Chang, M Strobel, R Shokri - Proceedings of the 2023 ACM …, 2023 - dl.acm.org
Statistical measures for group fairness in machine learning reflect the gap in performance of
algorithms across different groups. These measures, however, exhibit a high variance …
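
The gap these measures capture, and its sensitivity to training randomness, can be illustrated with a small experiment. A minimal sketch (not the paper's code), assuming synthetic data, a binary protected attribute, and the per-group accuracy gap as the fairness metric; only the training seed varies across runs:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
group = rng.integers(0, 2, size=2000)            # binary protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=2000) > 0).astype(int)
Xtr, gtr, ytr = X[:1000], group[:1000], y[:1000]
Xte, gte, yte = X[1000:], group[1000:], y[1000:]

gaps = []
for seed in range(20):   # vary only the training randomness (shuffling, init)
    clf = SGDClassifier(loss="log_loss", random_state=seed).fit(Xtr, ytr)
    accs = [(clf.predict(Xte[gte == g]) == yte[gte == g]).mean() for g in (0, 1)]
    gaps.append(abs(accs[0] - accs[1]))          # per-group accuracy gap

print(f"gap mean={np.mean(gaps):.3f}, std across seeds={np.std(gaps):.3f}")
```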

Tighter lower bounds for shuffling SGD: Random permutations and beyond

J Cha, J Lee, C Yun - International Conference on Machine …, 2023 - proceedings.mlr.press
We study convergence lower bounds of without-replacement stochastic gradient descent
(SGD) for solving smooth (strongly-) convex finite-sum minimization problems. Unlike most …
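
The object of study, without-replacement (shuffling) SGD on a finite-sum problem, looks as follows in a minimal sketch; the least-squares components and step size are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):              # gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return (A[i] @ x - b[i]) * A[i]

x, lr = np.zeros(d), 0.01
for epoch in range(50):
    perm = rng.permutation(n)  # fresh permutation: "random reshuffling"
    for i in perm:             # each component used exactly once per epoch
        x -= lr * grad_i(x, i)

print("objective:", 0.5 * np.mean((A @ x - b) ** 2))
```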

Repeated random sampling for minimizing the time-to-accuracy of learning

P Okanovic, R Waleffe, V Mageirakos… - arXiv preprint arXiv …, 2023 - arxiv.org
Methods for carefully selecting or generating a small set of training data to learn from, i.e.,
data pruning, coreset selection, and data distillation, have been shown to be effective in …
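
A minimal sketch of the contrast the paper draws, assuming a hypothetical train_one_epoch step: data pruning fixes one small subset up front, while repeated random sampling redraws the subset every epoch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, subset, epochs = 10_000, 1_000, 10
fixed_idx = rng.choice(n, size=subset, replace=False)  # pruning: one subset, chosen once

seen = set()
for epoch in range(epochs):
    fresh_idx = rng.choice(n, size=subset, replace=False)  # redrawn each epoch
    seen.update(fresh_idx.tolist())
    # train_one_epoch(data[fresh_idx])  # hypothetical per-epoch training step

print(f"pruning sees {subset} examples; resampling saw {len(seen)} over {epochs} epochs")
```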

Mini-Batch Optimization of Contrastive Loss

J Cho, K Sreenivasan, K Lee, K Mun, S Yi… - arXiv preprint arXiv …, 2023 - arxiv.org
Contrastive learning has gained significant attention as a method for self-supervised
learning. The contrastive loss function ensures that embeddings of positive sample pairs …
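
A minimal sketch of a mini-batch contrastive (InfoNCE-style) loss, where in-batch positives sit on the diagonal and all other pairs act as negatives; the embeddings and temperature are illustrative assumptions:

```python
import numpy as np

def info_nce(z1, z2, temp=0.1):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize rows
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                 # (B, B) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))       # positives are the diagonal pairs

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print("mini-batch contrastive loss:", info_nce(z1, z2))
```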

Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling

J Li, H Huang - arXiv preprint arXiv:2411.05868, 2024 - arxiv.org
Bilevel Optimization has experienced significant advancements recently with the
introduction of new efficient algorithms. Mirroring the success in single-level optimization …
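
A minimal sketch of the sampling idea in the title, drawing both inner- and outer-level samples by shuffling (without replacement) instead of i.i.d., on a toy quadratic bilevel problem; the problem, step sizes, and double-loop structure are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
a = rng.normal(size=n)                 # lower-level per-sample data
c = rng.normal(size=n)                 # upper-level per-sample data
x, y, lr = 0.0, 0.0, 0.05

for epoch in range(100):
    for i in rng.permutation(n):       # inner level: shuffled samples
        y -= lr * (y - x - a[i])       # d/dy of 0.5 * (y - x - a_i)^2
    for j in rng.permutation(n):       # outer level: shuffled samples
        x -= lr * (y - c[j])           # hypergradient of 0.5 * (y*(x) - c_j)^2, using dy*/dx = 1

print("x:", x, "y:", y)
```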

CD-GraB: Coordinating Distributed Example Orders for Provably Accelerated Training

AF Cooper, W Guo, DK Pham, T Yuan… - Advances in …, 2024 - proceedings.neurips.cc
Recent research on online Gradient Balancing (GraB) has revealed that there exist
permutation-based example orderings that are guaranteed to outperform random reshuffling …
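
The balancing idea behind GraB-style orderings can be sketched as a signed-herding pass: center the per-example gradients, greedily assign each a +/- sign so the running sum stays small, and use the signs to build the next epoch's order. A simplified illustration, not the full GraB or CD-GraB algorithm:

```python
import numpy as np

def balance_order(grads):
    mean = grads.mean(axis=0)
    run = np.zeros(grads.shape[1])
    left, right = [], []
    for i, g in enumerate(grads):
        c = g - mean                          # center so signs can cancel
        if np.linalg.norm(run + c) <= np.linalg.norm(run - c):
            run += c; left.append(i)          # "+" examples go to the front
        else:
            run -= c; right.append(i)         # "-" examples go to the back
    return left + right[::-1]                 # next epoch's example order

rng = np.random.default_rng(0)
print(balance_order(rng.normal(size=(8, 4))))
```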

Accelerating Federated Learning by Selecting Beneficial Herd of Local Gradients

P Luo, X Deng, Z Wen, T Sun, D Li - arXiv preprint arXiv:2403.16557, 2024 - arxiv.org
Federated Learning (FL) is a distributed machine learning framework in communication
network systems. However, the systems' Non-Independent and Identically Distributed (Non …
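
A minimal sketch of one round of selective aggregation, where the server keeps only a subset of client updates; the cosine-alignment rule below is a hypothetical stand-in for the paper's "beneficial herd" criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 50))            # one update vector per client
mean_dir = updates.mean(axis=0)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

scores = np.array([cos(u, mean_dir) for u in updates])
keep = scores.argsort()[-5:]                   # keep the 5 best-aligned clients
new_global_step = updates[keep].mean(axis=0)   # aggregate only the selected herd
print("selected clients:", sorted(keep.tolist()))
```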

Stochastic optimization with arbitrary recurrent data sampling

WG Powell, H Lyu - arXiv preprint arXiv:2401.07694, 2024 - arxiv.org
To obtain an optimal first-order convergence guarantee for stochastic optimization, it is
necessary to use a recurrent data sampling algorithm that samples every data point with …
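
A minimal sketch of a recurrent sampler, assuming "recurrent" means every data point is guaranteed to be visited within each window of n steps (here via per-epoch reshuffling), unlike i.i.d. sampling, which can miss a point for arbitrarily long:

```python
import numpy as np

def recurrent_sampler(n, rng):
    while True:
        for i in rng.permutation(n):   # every index appears once per n steps
            yield i

rng = np.random.default_rng(0)
it = recurrent_sampler(5, rng)
print([next(it) for _ in range(10)])   # two full passes over {0, ..., 4}
```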

On the Last-Iterate Convergence of Shuffling Gradient Methods

Z Liu, Z Zhou - arXiv preprint arXiv:2403.07723, 2024 - arxiv.org
Shuffling gradient methods, which are also known as stochastic gradient descent (SGD)
without replacement, are widely implemented in practice, particularly including three popular …
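
A minimal sketch of the three shuffling variants commonly studied in this literature, Random Reshuffling (RR), Shuffle Once (SO), and Incremental Gradient (IG); assuming these are the variants the truncated abstract refers to:

```python
import numpy as np

def epoch_order(scheme, n, rng, fixed_perm):
    if scheme == "RR":
        return rng.permutation(n)      # fresh permutation every epoch
    if scheme == "SO":
        return fixed_perm              # one permutation, reused every epoch
    return np.arange(n)                # IG: fixed incremental order 0, ..., n-1

rng = np.random.default_rng(0)
fixed_perm = rng.permutation(6)
for scheme in ("RR", "SO", "IG"):
    print(scheme, [epoch_order(scheme, 6, rng, fixed_perm).tolist() for _ in range(2)])
```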