Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness
We present a new accelerated stochastic second-order method that is robust to both
gradient and Hessian inexactness, which typically occurs in machine learning. We establish …
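For context, gradient and Hessian inexactness in second-order methods is commonly modeled by bounds on how far the available estimates may deviate from the exact quantities; the conditions below show one standard form of such assumptions (not necessarily the exact conditions used in this paper), with hypothetical symbols $\tilde g_k$, $\tilde H_k$, $\delta_g$, $\delta_H$:
\[
\|\tilde g_k - \nabla f(x_k)\| \le \delta_g, \qquad \|\tilde H_k - \nabla^2 f(x_k)\| \le \delta_H,
\]
where $\tilde g_k$ and $\tilde H_k$ are the inexact (e.g., stochastic mini-batch) gradient and Hessian at the iterate $x_k$, and $\delta_g, \delta_H \ge 0$ quantify the admissible errors.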
Improving Stochastic Cubic Newton with Momentum
We study stochastic second-order methods for solving general non-convex optimization
problems. We propose using a special version of momentum to stabilize the stochastic …
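One common reading of momentum for stochastic second-order estimates is exponential averaging of the sampled gradients and Hessians before the update; the sketch below uses this generic construction with an assumed averaging weight $\alpha$, and the paper's specific momentum variant may differ:
\[
g_k = (1-\alpha)\, g_{k-1} + \alpha\, \nabla f(x_k;\xi_k), \qquad
H_k = (1-\alpha)\, H_{k-1} + \alpha\, \nabla^2 f(x_k;\xi_k), \qquad \alpha \in (0,1],
\]
after which the averaged pair $(g_k, H_k)$ replaces the exact gradient and Hessian in a cubic-regularized (Cubic Newton) step.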
Diffusion Stochastic Optimization for Min-Max Problems
H Cai, SA Alghunaim, AH Sayed - arXiv preprint arXiv:2401.14585, 2024 - arxiv.org
The optimistic gradient method is useful in addressing minimax optimization problems.
Motivated by the observation that the conventional stochastic version suffers from the need …
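As background, the deterministic optimistic gradient update for $\min_x \max_y f(x,y)$, written on the stacked variable $z = (x, y)$ with the saddle operator $F(z) = (\nabla_x f(x,y),\, -\nabla_y f(x,y))$ and step size $\eta > 0$, reads
\[
z_{k+1} = z_k - \eta \big( 2 F(z_k) - F(z_{k-1}) \big);
\]
the paper's stochastic diffusion variant builds on updates of this type with sampled estimates of $F$.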
Adaptive Quasi-Newton and Anderson acceleration framework with explicit global (accelerated) convergence rates
D Scieur - … Conference on Artificial Intelligence and Statistics, 2024 - proceedings.mlr.press
Despite the impressive numerical performance of the quasi-Newton and Anderson/nonlinear
acceleration methods, their global convergence rates have remained elusive for over 50 …
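For reference, a standard form of Anderson acceleration for a fixed-point map $g$ with residual $r(x) = g(x) - x$ mixes the last $m+1$ iterates as follows (shown only as context for the framework above, not as the paper's exact scheme):
\[
\alpha^{(k)} = \operatorname*{argmin}_{\alpha}\ \Big\| \sum_{i=0}^{m} \alpha_i\, r(x_{k-m+i}) \Big\| \ \ \text{s.t.}\ \sum_{i=0}^{m} \alpha_i = 1, \qquad
x_{k+1} = \sum_{i=0}^{m} \alpha_i^{(k)}\, g(x_{k-m+i}).
\]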
Accelerated adaptive cubic regularized quasi-Newton methods
In this paper, we propose Cubic Regularized Quasi-Newton Methods for (strongly)
star-convex and Accelerated Cubic Regularized Quasi-Newton for convex optimization. The …
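In its generic form, a cubic-regularized quasi-Newton step replaces the exact Hessian in the Nesterov–Polyak cubic model with a quasi-Newton approximation $B_k$; the paper's accelerated and adaptive variants presumably build on models of this type, with $M_k$ denoting a (possibly adaptively chosen) regularization parameter:
\[
x_{k+1} = \operatorname*{argmin}_{y}\ \Big\{ \langle \nabla f(x_k),\, y - x_k \rangle + \tfrac12 \langle B_k (y - x_k),\, y - x_k \rangle + \tfrac{M_k}{6} \| y - x_k \|^3 \Big\}.
\]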
Second-Order Min-Max Optimization with Lazy Hessians
This paper studies second-order methods for convex-concave minimax optimization.
Monteiro and Svaiter (2012) proposed a method to solve the problem with an optimal …
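The generic "lazy Hessian" idea is to amortize second-order information by recomputing the Hessian (in the min-max setting, the Jacobian of the saddle operator $F$) only once every $m$ iterations and reusing the stored snapshot in between; the exact schedule used in the paper may differ. With the snapshot index $\pi(k) = m \lfloor k/m \rfloor$, the second-order step at iteration $k$ uses $\nabla F(z_{\pi(k)})$ in place of $\nabla F(z_k)$.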
OPTAMI: Global Superlinear Convergence of High-order Methods
Second-order methods for convex optimization outperform first-order methods in terms of
theoretical iteration convergence, achieving rates up to $O(k^{-5})$ for highly-smooth …
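To unpack the quoted rate: for convex objectives with a Lipschitz-continuous $p$-th derivative, accelerated $p$-th order schemes attain rates of order $O(k^{-(3p+1)/2})$, matching the corresponding lower bound; the exponent in the snippet corresponds to $p = 3$, a highly-smooth regime that has been shown reachable using only second-order information:
\[
\frac{3p+1}{2}\bigg|_{p=2} = \frac{7}{2}, \qquad \frac{3p+1}{2}\bigg|_{p=3} = 5 \ \Longrightarrow\ O(k^{-5}).
\]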
Fault Tolerant ML: Efficient Meta-Aggregation and Synchronous Training
In this paper, we investigate the challenging framework of Byzantine-robust training in
distributed machine learning (ML) systems, focusing on enhancing both efficiency and …
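For context, Byzantine-robust training replaces naive averaging of worker gradients with a robust aggregation rule; the coordinate-wise median below is one classical example, shown purely as an illustration and not as the meta-aggregation scheme proposed in the paper.

import numpy as np

def coordinate_median(worker_grads):
    # Coordinate-wise median over worker gradients: a classical
    # Byzantine-robust aggregation rule (illustrative only).
    return np.median(np.stack(worker_grads, axis=0), axis=0)

# Nine honest workers report similar gradients; one Byzantine worker sends garbage.
# The aggregate stays close to the honest values despite the outlier.
rng = np.random.default_rng(0)
honest = [np.array([1.0, 2.0, 3.0]) + 0.01 * rng.standard_normal(3) for _ in range(9)]
byzantine = [np.array([1e6, -1e6, 1e6])]
print(coordinate_median(honest + byzantine))  # roughly [1, 2, 3]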
Inexact and Implementable Accelerated Newton Proximal Extragradient Method for Convex Optimization
Z Huang, B Jiang, Y Jiang - arXiv preprint arXiv:2402.11951, 2024 - arxiv.org
In this paper, we investigate the convergence behavior of the Accelerated Newton Proximal
Extragradient (A-NPE) method when employing inexact Hessian information. The exact A …
Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations
Variational inequalities represent a broad class of problems, including minimization and min-
max problems, commonly found in machine learning. Existing second-order and high-order …
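As background, a variational inequality over a set $Z$ with operator $F$ asks for a point $z^*$ satisfying
\[
\langle F(z^*),\, z - z^* \rangle \ge 0 \qquad \text{for all } z \in Z,
\]
where taking $F = \nabla f$ recovers minimization of $f$, and taking $F(x,y) = (\nabla_x f(x,y),\, -\nabla_y f(x,y))$ recovers the min-max problem $\min_x \max_y f(x,y)$; this is the sense in which minimization and min-max problems are special cases.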