The power of first-order smooth optimization for black-box non-smooth problems
A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade, with the main focus on oracle call complexity. In this …
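As background for this line of work, here is a minimal sketch of the two-point randomized finite-difference estimator that zeroth-order methods of this kind typically build on; the smoothing radius, stepsize, and test function are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def two_point_grad_estimate(f, x, h=1e-4, rng=None):
    """Randomized two-point gradient estimator: probe f along a random
    unit direction u and return d * (f(x+h*u) - f(x-h*u)) / (2h) * u,
    an unbiased estimate of the gradient of a smoothed version of f."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + h * u) - f(x - h * u)) / (2 * h) * u

# Zeroth-order gradient descent on a smooth convex quadratic.
f = lambda x: 0.5 * np.dot(x, x)
x = np.ones(10)
for _ in range(2000):
    x -= 0.05 * two_point_grad_estimate(f, x)
print(np.linalg.norm(x))  # small: the method converges without gradients
```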
A Damped Newton Method Achieves Global and Local Quadratic Convergence Rate
S Hanzely, D Kamzolov… - Advances in …, 2022 - proceedings.neurips.cc
In this paper, we present the first stepsize schedule for the Newton method resulting in fast
global and local convergence guarantees. In particular, we a) prove an $\mathcal O\left …
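The snippet does not reproduce the schedule itself; below is a sketch of a damped Newton iteration using the classical self-concordant stepsize alpha = 1/(1 + lambda), with lambda the Newton decrement, which is a standard damping rule and not necessarily the one proposed in the paper.

```python
import numpy as np

def damped_newton(grad, hess, x0, iters=50, tol=1e-10):
    """Damped Newton with the classical self-concordant stepsize
    alpha = 1 / (1 + lambda), where lambda = sqrt(g^T H^{-1} g) is the
    Newton decrement: cautious far from the optimum, alpha -> 1 near it."""
    x = x0.astype(float)
    for _ in range(iters):
        g, H = grad(x), hess(x)
        step = np.linalg.solve(H, g)
        lam = np.sqrt(g @ step)        # Newton decrement
        if lam < tol:
            break
        x -= step / (1.0 + lam)
    return x

# Toy strictly convex objective: f(x) = sum_i log(1 + exp(a_i^T x)).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
grad = lambda x: A.T @ sigmoid(A @ x)
hess = lambda x: A.T @ (A * (sigmoid(A @ x) * (1 - sigmoid(A @ x)))[:, None])
print(np.linalg.norm(grad(damped_newton(grad, hess, np.zeros(5)))))
```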
Cubic regularized subspace Newton for non-convex optimization
This paper addresses the optimization problem of minimizing non-convex continuous
functions, which is relevant in the context of high-dimensional machine learning applications …
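As a concrete illustration of the idea, here is a sketch of one cubic regularized Newton step restricted to a random low-dimensional subspace; the sampling distribution, regularization constant M, and subproblem solver are assumptions for the example, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_subspace_newton_step(grad, hess, x, k=2, M=10.0, rng=None):
    """One cubic regularized Newton step in a random k-dimensional
    subspace: project gradient and Hessian onto S, then minimize the
    cubic model numerically (sketch under assumed details)."""
    rng = np.random.default_rng() if rng is None else rng
    S, _ = np.linalg.qr(rng.standard_normal((x.size, k)))  # orthonormal basis
    g = S.T @ grad(x)                                      # projected gradient
    H = S.T @ hess(x) @ S                                  # projected Hessian
    model = lambda h: g @ h + 0.5 * h @ H @ h + (M / 6) * np.linalg.norm(h) ** 3
    h = minimize(model, np.zeros(k), method="BFGS").x      # cubic subproblem
    return x + S @ h

# Illustrative use on a simple non-convex function f(x) = sum(x^4 - x^2).
grad = lambda x: 4 * x**3 - 2 * x
hess = lambda x: np.diag(12 * x**2 - 2)
x = np.full(10, 0.3)
for _ in range(100):
    x = cubic_subspace_newton_step(grad, hess, x)
print(np.round(x, 3))  # coordinates drift toward minimizers near 1/sqrt(2)
```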
Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness
We present a new accelerated stochastic second-order method that is robust to both
gradient and Hessian inexactness, which typically occurs in machine learning. We establish …
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
Statistical preconditioning enables fast methods for distributed large-scale empirical risk
minimization problems. In this approach, multiple worker nodes compute gradients in …
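The core mechanism can be sketched as follows: curvature estimated from a single node's local data preconditions the exact aggregated gradient, and the statistical similarity between local and global Hessians makes this cheap preconditioner accurate. The regularizer mu, the data split, and the exact subproblem below are assumptions for illustration, not a specific paper's algorithm.

```python
import numpy as np

def preconditioned_step(x, full_grad, local_hess, mu=1e-3):
    """One statistically preconditioned iteration (sketch): the exact
    gradient is aggregated over all workers, while curvature comes from
    a single node's local data plus a small regularizer."""
    H = local_hess(x) + mu * np.eye(x.size)
    return x - np.linalg.solve(H, full_grad(x))

# Toy distributed least squares: data split across 4 "workers".
rng = np.random.default_rng(1)
A = rng.standard_normal((400, 8)); b = rng.standard_normal(400)
shards = np.array_split(np.arange(400), 4)
full_grad = lambda x: A.T @ (A @ x - b) / 400
local_hess = lambda x: A[shards[0]].T @ A[shards[0]] / len(shards[0])
x = np.zeros(8)
for _ in range(20):
    x = preconditioned_step(x, full_grad, local_hess)
print(np.linalg.norm(full_grad(x)))  # near zero: fast linear convergence
```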
Improving Stochastic Cubic Newton with Momentum
We study stochastic second-order methods for solving general non-convex optimization
problems. We propose using a special version of momentum to stabilize the stochastic …
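A hedged sketch of the idea: exponential-moving-average (momentum) estimates of the minibatch gradient and Hessian are plugged into a cubic regularized Newton step, which damps oracle noise. The precise momentum variant and constants analyzed in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize

def stochastic_cubic_newton_momentum(grad_s, hess_s, x0, M=5.0, beta=0.1,
                                     iters=200, rng=None):
    """Momentum-averaged stochastic oracles feeding a cubic regularized
    Newton step (sketch under assumed details)."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float)
    g_bar, H_bar = grad_s(x, rng), hess_s(x, rng)
    for _ in range(iters):
        # Momentum estimates reduce the variance of the stochastic oracles.
        g_bar = (1 - beta) * g_bar + beta * grad_s(x, rng)
        H_bar = (1 - beta) * H_bar + beta * hess_s(x, rng)
        model = lambda h: (g_bar @ h + 0.5 * h @ H_bar @ h
                           + (M / 6) * np.linalg.norm(h) ** 3)
        x = x + minimize(model, np.zeros(x.size), method="BFGS").x
    return x

# Noisy oracles for f(x) = 0.5 * ||x||^2.
grad_s = lambda x, rng: x + 0.3 * rng.standard_normal(x.size)
hess_s = lambda x, rng: np.eye(x.size) + 0.1 * rng.standard_normal((x.size, x.size))
x = stochastic_cubic_newton_momentum(grad_s, hess_s, np.ones(4))
print(np.linalg.norm(x))  # small: iterates settle near the optimum
```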
Acceleration exists! Optimization problems when oracle can only compare objective function values
A Lobanov, A Gasnikov, A Krasnov - The Thirty-eighth Annual …, 2024 - openreview.net
Frequently, the burgeoning field of black-box optimization encounters challenges due to a
limited understanding of the mechanisms of the objective function. To address such …
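In this oracle model the algorithm never sees function values, only the outcome of comparisons. Below is a minimal non-accelerated baseline, assuming a hypothetical `better(a, b)` comparison oracle: it probes a random direction and follows the sign of the directional derivative. The accelerated methods of the paper are considerably more refined.

```python
import numpy as np

def compare_descent(better, x0, delta=1e-3, step=0.02, iters=5000, rng=None):
    """Optimization with a comparison oracle only: better(a, b) returns
    True iff f(a) < f(b). Each iteration probes a random direction u and
    moves toward whichever of x +/- delta*u compares as better."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float)
    for _ in range(iters):
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)
        if better(x + delta * u, x - delta * u):
            x += step * u
        else:
            x -= step * u
    return x

# Comparison oracle for f(x) = ||x - 1||^2; f itself is never exposed.
f = lambda x: np.sum((x - 1.0) ** 2)
better = lambda a, b: f(a) < f(b)
print(np.round(compare_descent(better, np.zeros(5)), 2))  # approaches all-ones
```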
A stochastic objective-function-free adaptive regularization method with optimal complexity
A fully stochastic second-order adaptive-regularization method for unconstrained nonconvex
optimization is presented which never computes the objective-function value, yet …
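One plausible instantiation of the objective-function-free idea is sketched below: the step comes from an adaptively regularized second-order model, and the regularization weight sigma is adjusted using gradient norms only, so f is never evaluated. The acceptance test and update factors are stand-in assumptions, not the paper's mechanism.

```python
import numpy as np
from scipy.optimize import minimize

def offo_ar2(grad, hess, x0, sigma=1.0, iters=40):
    """Objective-function-free adaptive regularization (sketch):
    no f evaluations anywhere; sigma adapts from gradient norms."""
    x = x0.astype(float)
    for _ in range(iters):
        g, H = grad(x), hess(x)
        model = lambda h: g @ h + 0.5 * h @ H @ h + (sigma / 3) * np.linalg.norm(h) ** 3
        s = minimize(model, np.zeros(x.size), method="BFGS").x
        # Gradient-only test: did the step reduce the gradient norm?
        if np.linalg.norm(grad(x + s)) <= np.linalg.norm(g):
            x, sigma = x + s, max(sigma / 2, 1e-8)   # success: relax
        else:
            sigma *= 2                               # failure: regularize more
    return x

# Strongly convex test problem: f(x) = 0.25*(x^T x)^2 + 0.5*||x - 1||^2.
grad = lambda x: (x @ x) * x + (x - 1.0)
hess = lambda x: (x @ x + 1.0) * np.eye(x.size) + 2.0 * np.outer(x, x)
print(np.linalg.norm(grad(offo_ar2(grad, hess, np.zeros(4)))))
```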
Accelerated adaptive cubic regularized quasi-Newton methods
In this paper, we propose Cubic Regularized Quasi-Newton Methods for (strongly)
star-convex and Accelerated Cubic Regularized Quasi-Newton for convex optimization. The …
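The basic (non-accelerated) version of this combination can be sketched as follows: a BFGS-style matrix built from gradient differences stands in for the Hessian inside a cubic regularized step. Update rules and constants below are assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_quasi_newton(grad, x0, M=1.0, iters=60):
    """Cubic regularized quasi-Newton (sketch): B approximates the
    Hessian via BFGS updates, each step minimizes the cubic model
    g^T h + h^T B h / 2 + M ||h||^3 / 6."""
    x = x0.astype(float)
    B = np.eye(x.size)
    g = grad(x)
    for _ in range(iters):
        model = lambda h: g @ h + 0.5 * h @ B @ h + (M / 6) * np.linalg.norm(h) ** 3
        s = minimize(model, np.zeros(x.size), method="BFGS").x
        g_new = grad(x + s)
        y = g_new - g
        if y @ s > 1e-12:              # standard BFGS curvature condition
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        x, g = x + s, g_new
    return x

# Illustrative use on a convex quartic: f(x) = sum(x^4)/4 + ||x - 1||^2 / 2.
grad_f = lambda x: x**3 + (x - 1.0)
print(np.linalg.norm(grad_f(cubic_quasi_newton(grad_f, np.zeros(6)))))
```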
Exploiting Higher Order Derivatives in Convex Optimization Methods
It has been well known since the works of Newton [64] and Kantorovich [45] that the second-order
derivative of the objective function can be used in numerical algorithms for solving …
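For reference, the canonical mechanism behind such methods is the regularized p-th order Taylor step, stated here in my own notation as an assumption; for p = 2 it reduces to the cubic regularized Newton step recurring in the entries above.

```latex
x_{k+1} = \operatorname*{arg\,min}_{y}
  \left\{ \sum_{i=1}^{p} \frac{1}{i!}\, D^{i} f(x_k)[y - x_k]^{i}
          + \frac{M}{(p+1)!}\, \lVert y - x_k \rVert^{p+1} \right\}
```

When M is at least the Lipschitz constant of the p-th derivative, the Taylor remainder bound makes the minimized model an upper bound on f, so every step is guaranteed not to increase the objective.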