Painless stochastic gradient: Interpolation, line-search, and convergence rates
Recent works have shown that stochastic gradient descent (SGD) achieves the fast
convergence rates of full-batch gradient descent for over-parameterized models satisfying …
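The line-search in the title is an Armijo-style backtracking rule evaluated on the sampled mini-batch, so no step size needs tuning under interpolation. A minimal sketch, assuming a per-sample least-squares loss; the function name and constants are illustrative, not the authors' code:

```python
import numpy as np

def sgd_armijo(X, y, w, steps=1000, eta_max=1.0, c=0.5, beta=0.7):
    """SGD where each step backtracks until the Armijo condition
    holds on the *sampled* loss (a sketch of a stochastic line-search)."""
    n = X.shape[0]
    rng = np.random.default_rng(0)
    for _ in range(steps):
        i = rng.integers(n)                        # sample one data point
        xi, yi = X[i], y[i]
        loss = lambda v: 0.5 * (xi @ v - yi) ** 2  # per-sample least squares
        g = (xi @ w - yi) * xi                     # its gradient at w
        eta = eta_max
        # backtrack until f_i(w - eta*g) <= f_i(w) - c * eta * ||g||^2
        while loss(w - eta * g) > loss(w) - c * eta * (g @ g):
            eta *= beta
        w = w - eta * g
    return w

# usage: recover w* from noiseless (interpolating) linear measurements
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
w_star = rng.normal(size=5)
w_hat = sgd_armijo(X, X @ w_star, np.zeros(5))
print(np.linalg.norm(w_hat - w_star))  # small under interpolation
```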
Convergence of sequences: A survey
B Franci, S Grammatico - Annual Reviews in Control, 2022 - Elsevier
Convergent sequences of real numbers play a fundamental role in many different problems
in system theory, e.g., in Lyapunov stability analysis, as well as in optimization theory and …
[BOOK] Uncertainty quantification in variational inequalities: theory, numerics, and applications
Uncertainty Quantification (UQ) is an emerging and extremely active research discipline
which aims to quantitatively treat any uncertainty in applied models. The primary objective of …
Adaptive, doubly optimal no-regret learning in strongly monotone and exp-concave games with gradient feedback
Online gradient descent (OGD) is well-known to be doubly optimal under strong convexity or
monotonicity assumptions: (1) in the single-agent setting, it achieves an optimal regret of Θ …
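For reference, single-agent OGD is a one-line update. A minimal sketch of projected OGD with the step size η_t = 1/(μt) that yields logarithmic regret under μ-strong convexity; the quadratic loss sequence and projection radius are illustrative assumptions:

```python
import numpy as np

def ogd(loss_grads, x0, mu=1.0, radius=10.0):
    """Projected online gradient descent: x_{t+1} = Proj(x_t - eta_t g_t),
    with eta_t = 1/(mu*t), the schedule giving O(log T) regret under
    mu-strong convexity."""
    x = x0.copy()
    iterates = [x.copy()]
    for t, grad in enumerate(loss_grads, start=1):
        x = x - grad(x) / (mu * t)
        nrm = np.linalg.norm(x)
        if nrm > radius:                 # Euclidean projection onto a ball
            x *= radius / nrm
        iterates.append(x.copy())
    return iterates

# usage: adversary plays shifted quadratics f_t(x) = mu/2 * ||x - c_t||^2
rng = np.random.default_rng(0)
centers = rng.normal(size=(100, 3))
grads = [lambda x, c=c: 1.0 * (x - c) for c in centers]
xs = ogd(grads, np.zeros(3))
print(xs[-1])  # drifts toward the running mean of the centers
```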
Inexact model: A framework for optimization and variational inequalities
F Stonyakin, A Tyurin, A Gasnikov… - Optimization Methods …, 2021 - Taylor & Francis
In this paper, we propose a general algorithmic framework for the first-order methods in
optimization in a broad sense, including minimization problems, saddle-point problems and …
Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games
In this paper, we consider stochastic monotone Nash games where each player's strategy
set is characterized by a possibly large number of explicit convex constraint inequalities …
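The randomization here is over the constraints: rather than handling all m inequalities at once, each iteration samples one and takes a primal-dual step on the Lagrangian. A minimal single-agent sketch under assumed linear constraints a_j^T x <= b_j; the problem data, step size, and scaling are illustrative, not the paper's scheme:

```python
import numpy as np

def randomized_lagrangian_sa(grad, A, b, x0, steps=5000, gamma=0.01):
    """Stochastic approximation on L(x, lam) = f(x) + sum_j lam_j (a_j@x - b_j),
    touching one sampled constraint j per iteration instead of all m."""
    m = A.shape[0]
    x, lam = x0.copy(), np.zeros(m)
    rng = np.random.default_rng(0)
    for _ in range(steps):
        j = rng.integers(m)                       # sample one constraint
        # primal descent: grad of f plus the sampled penalty term,
        # scaled by m so the sampled gradient is unbiased
        x = x - gamma * (grad(x) + m * lam[j] * A[j])
        # dual ascent on the same sampled multiplier, kept nonnegative
        lam[j] = max(0.0, lam[j] + gamma * m * (A[j] @ x - b[j]))
    return x, lam

# usage: minimize ||x - target||^2 subject to x_i <= 0 (A = I, b = 0)
target = np.array([1.0, -2.0, 3.0])
x, lam = randomized_lagrangian_sa(lambda x: 2 * (x - target),
                                  np.eye(3), np.zeros(3), np.zeros(3))
print(x)  # approx. the projection of target onto the nonpositive orthant
```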
A distributed forward–backward algorithm for stochastic generalized Nash equilibrium seeking
B Franci, S Grammatico - IEEE Transactions on Automatic …, 2020 - ieeexplore.ieee.org
We consider the stochastic generalized Nash equilibrium problem (SGNEP) with expected-
value cost functions. Inspired by Yi and Pavel (2019), we propose a distributed generalized …
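Forward-backward splitting for such problems alternates a stochastic pseudo-gradient step with a resolvent step, which for a constraint set reduces to a projection. A minimal centralized sketch with an increasing mini-batch, a common variance-reduction device in stochastic GNEP solvers; the operator, set, and schedule are illustrative assumptions, not the paper's distributed algorithm:

```python
import numpy as np

def stochastic_forward_backward(F_sample, project, x0, steps=2000,
                                gamma=0.05, batch0=1):
    """Forward-backward splitting for 0 in F(x) + N_C(x): a forward step on
    a mini-batch estimate of F, then a backward step = projection onto C
    (the resolvent of the normal cone N_C)."""
    x = x0.copy()
    rng = np.random.default_rng(0)
    for k in range(1, steps + 1):
        batch = batch0 + k // 100            # slowly increasing sample size
        g = np.mean([F_sample(x, rng) for _ in range(batch)], axis=0)
        x = project(x - gamma * g)           # forward, then backward
    return x

# usage: F(x) = Q x + c with additive noise; Q is nonsymmetric but
# positive definite, mimicking a strongly monotone game
Q = np.array([[2.0, 1.0], [-1.0, 2.0]])
c = np.array([-1.0, 0.5])
F = lambda x, rng: Q @ x + c + 0.1 * rng.normal(size=2)
proj_box = lambda x: np.clip(x, -1.0, 1.0)   # C = [-1, 1]^2
print(stochastic_forward_backward(F, proj_box, np.zeros(2)))
# approximates the VI solution, here (0.5, 0)
```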
Sifting through the noise: Universal first-order methods for stochastic variational inequalities
We examine a flexible algorithmic framework for solving monotone variational inequalities in
the presence of randomness and uncertainty. The proposed template encompasses a wide …
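One representative instance of such a universal template is extra-gradient with an AdaGrad-style step size, which adapts to the noise level without knowing it. A sketch under that assumption; the initial scale and the toy bilinear operator are illustrative:

```python
import numpy as np

def adaptive_extragradient(F_sample, project, x0, steps=3000):
    """Extra-gradient whose step size scales as 1/sqrt(sum of squared
    gradient norms seen so far), so the same code adapts between smooth
    and noisy regimes."""
    x = x0.copy()
    rng = np.random.default_rng(0)
    sq_sum = 1.0                               # assumed initial scale
    for _ in range(steps):
        eta = 1.0 / np.sqrt(sq_sum)
        g1 = F_sample(x, rng)
        x_half = project(x - eta * g1)         # extrapolation step
        g2 = F_sample(x_half, rng)
        x = project(x - eta * g2)              # update step
        sq_sum += g2 @ g2                      # accumulate for adaptivity
    return x

# usage: F(u, v) = (v, -u), the operator of the bilinear saddle min_u max_v u*v
F = lambda z, rng: np.array([z[1], -z[0]]) + 0.05 * rng.normal(size=2)
proj = lambda z: np.clip(z, -1.0, 1.0)
print(adaptive_extragradient(F, proj, np.array([0.8, -0.6])))  # near (0, 0)
```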
Optimal algorithms for differentially private stochastic monotone variational inequalities and saddle-point problems
D Boob, C Guzmán - Mathematical Programming, 2024 - Springer
In this work, we conduct the first systematic study of stochastic variational inequality (SVI)
and stochastic saddle point (SSP) problems under the constraint of differential privacy (DP) …
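A standard route to differential privacy in this setting is the Gaussian mechanism inside stochastic gradient descent-ascent: clip each sampled gradient and add calibrated noise. A minimal sketch with illustrative clipping and noise constants and no privacy accounting; this is the generic recipe, not the paper's optimal algorithm:

```python
import numpy as np

def dp_sgda(grad_xy, data, steps=500, eta=0.05, clip=1.0, sigma=2.0):
    """Noisy stochastic gradient descent-ascent: per-sample gradients are
    clipped to norm <= clip and perturbed with Gaussian noise, the usual
    Gaussian-mechanism recipe for differential privacy."""
    rng = np.random.default_rng(0)
    x, y = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        z = data[rng.integers(len(data))]
        gx, gy = grad_xy(x, y, z)
        g = np.concatenate([gx, -gy])          # descend in x, ascend in y
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip
        g += sigma * clip * rng.normal(size=g.size)        # Gaussian noise
        x, y = x - eta * g[:2], y - eta * g[2:]
    return x, y

# usage: toy saddle f(x, y; z) = 0.5||x - z||^2 + x@y - 0.5||y||^2
data = np.random.default_rng(1).normal(size=(100, 2))
gxy = lambda x, y, z: ((x - z) + y, x - y)
print(dp_sgda(gxy, data))
```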
Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities
ZP Yang, Y Zhao, GH Lin - Journal of Global Optimization, 2024 - Springer
In this paper, we propose a variable sample-size optimistic mirror descent algorithm under
the Bregman distance for a class of stochastic mixed variational inequalities. Different from …
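Optimistic mirror descent reuses the previous operator evaluation as a prediction of the next one, so it needs a single oracle call per iteration where extra-gradient needs two; the variable sample-size device then grows the mini-batch across iterations. A minimal sketch on the probability simplex with the entropic Bregman distance; the geometry, schedule, and toy operator are illustrative assumptions:

```python
import numpy as np

def optimistic_md_simplex(F_sample, x0, steps=2000, eta=0.1, batch0=1):
    """Optimistic mirror descent under the entropic Bregman distance:
    mirror step x_{t+1} ∝ x_t * exp(-eta * (2 g_t - g_{t-1})), i.e. the
    previous gradient serves as an optimistic prediction of the next one.
    The mini-batch grows with t (the variable sample-size device)."""
    x = x0.copy()
    rng = np.random.default_rng(0)
    g_prev = np.zeros_like(x)
    for t in range(1, steps + 1):
        batch = batch0 + t // 200              # slowly increasing samples
        g = np.mean([F_sample(x, rng) for _ in range(batch)], axis=0)
        x = x * np.exp(-eta * (2 * g - g_prev))  # entropic mirror step
        x /= x.sum()                             # renormalize onto simplex
        g_prev = g
    return x

# usage: F(x) = A x + noise for positive-definite A; the VI solution on
# the simplex equalizes the coordinates of A x (here (1/3, 2/3))
A = np.array([[3.0, 1.0], [1.0, 2.0]])
F = lambda x, rng: A @ x + 0.05 * rng.normal(size=2)
print(optimistic_md_simplex(F, np.array([0.5, 0.5])))
```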