A theoretical and empirical comparison of gradient approximations in derivative-free optimization

AS Berahas, L Cao, K Choromanski… - Foundations of …, 2022 - Springer
In this paper, we analyze several methods for approximating gradients of noisy functions
using only function values. These methods include finite differences, linear interpolation …
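
As a concrete illustration of the simplest method in this family, here is a minimal forward-difference sketch (the test function, the step h, and the evaluation point are illustrative choices, not taken from the paper):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x with forward differences.

    Each coordinate costs one extra function evaluation, so a full
    gradient estimate takes d + 1 calls to f.
    """
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Example: f(x) = ||x||^2 has gradient 2x.
f = lambda x: np.dot(x, x)
x = np.array([1.0, -2.0, 3.0])
print(fd_gradient(f, x))  # ~ [2., -4., 6.]
```

For noisy f, the choice of h trades off truncation error against noise amplification, which is exactly the kind of question the paper's analysis addresses.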

Gradients without backpropagation

AG Baydin, BA Pearlmutter, D Syme, F Wood… - arXiv preprint arXiv …, 2022 - arxiv.org
Using backpropagation to compute gradients of objective functions for optimization has
remained a mainstay of machine learning. Backpropagation, or reverse-mode differentiation …
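
The identity behind this kind of backpropagation-free gradient: for v ~ N(0, I), the forward gradient (∇f(x)·v) v is an unbiased estimator of ∇f(x), and the directional derivative ∇f(x)·v costs a single forward-mode pass with no reverse sweep. A minimal Monte-Carlo sanity check (the quadratic test function and its analytic directional derivative stand in for a forward-mode AD call):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x = np.array([1.0, -1.0])

def jvp(x, v):
    # Directional derivative of f(x) = 0.5 * x^T A x along v,
    # i.e. what one forward-mode AD pass would return.
    return x @ A @ v

grad_true = A @ x
samples = [jvp(x, v) * v for v in rng.standard_normal((10000, 2))]
print(np.mean(samples, axis=0))  # ~ grad_true, without backprop
print(grad_true)
```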

Recent theoretical advances in non-convex optimization

M Danilova, P Dvurechensky, A Gasnikov… - … and Probability: With a …, 2022 - Springer
Motivated by recent increased interest in optimization algorithms for non-convex
optimization, in applications such as training deep neural networks and other optimization problems …

The power of first-order smooth optimization for black-box non-smooth problems

A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade, with the main focus on oracle call complexity. In this …
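
For reference, the standard smoothing construction behind this line of work (a textbook identity from the zeroth-order literature, not quoted from the paper itself) replaces a non-smooth f by a smoothed surrogate whose gradient can be estimated from function values alone:

```latex
f_\tau(x) = \mathbb{E}_{u \sim U(B_2^d)}\bigl[ f(x + \tau u) \bigr],
\qquad
\nabla f_\tau(x) = \frac{d}{\tau}\, \mathbb{E}_{e \sim U(S_2^{d-1})}\bigl[ f(x + \tau e)\, e \bigr].
```

For an M-Lipschitz f, the surrogate satisfies |f_τ(x) − f(x)| ≤ τM, so smooth first-order methods applied to f_τ solve the original non-smooth problem to an accuracy controlled by τ.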

Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems

D Dvinskikh, A Gasnikov - Journal of Inverse and Ill-posed Problems, 2021 - degruyter.com
We introduce primal and dual stochastic gradient oracle methods for decentralized convex
optimization problems. For both primal and dual oracles, the proposed methods are optimal …
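
As a point of reference for the decentralized setting, here is a minimal, non-accelerated decentralized gradient sketch — not the optimal methods of the paper; the ring topology, mixing weights, and quadratic local losses are illustrative assumptions:

```python
import numpy as np

n, d = 5, 3                      # agents, dimension
rng = np.random.default_rng(1)
targets = rng.standard_normal((n, d))

# Local objectives f_i(x) = 0.5 * ||x - b_i||^2; the network-wide
# minimizer is the mean of the b_i.
def local_grad(i, x):
    return x - targets[i]

# Doubly stochastic mixing matrix for a ring graph: each agent only
# combines its own iterate with those of its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))
eta = 0.1
for _ in range(500):
    # One round of local communication (W @ x) plus a local gradient step.
    x = W @ x - eta * np.array([local_grad(i, x[i]) for i in range(n)])

print(x.mean(axis=0))   # matches targets.mean(axis=0); individual rows
print(targets.mean(axis=0))  # agree with it up to an O(eta) bias
```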

Randomized gradient-free methods in convex optimization

A Gasnikov, D Dvinskikh, P Dvurechensky… - Encyclopedia of …, 2023 - Springer
Consider a convex optimization problem $\min_{x \in Q \subseteq \mathbb{R}^d} f(x)$ (1) with convex feasible set Q
and convex objective f possessing a zeroth-order (gradient/derivative-free) oracle [83]. The …
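
Under this setup, a minimal two-point zeroth-order gradient-descent sketch (the basic scheme only — the survey covers accelerated and more refined variants; the quadratic test function, smoothing radius tau, and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def zo_grad(f, x, tau=1e-4):
    """Two-point estimate d/(2 tau) * (f(x + tau e) - f(x - tau e)) e,
    with e drawn uniformly from the unit sphere."""
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

f = lambda x: np.sum((x - 1.0) ** 2)   # minimizer at the all-ones vector
x = np.zeros(10)
for _ in range(20000):
    x -= 0.01 * zo_grad(f, x)
print(x)  # close to 1 in every coordinate
```

Each iteration uses exactly two function evaluations, which is why oracle call complexity is the natural performance measure in this literature.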

Gradient methods for problems with inexact model of the objective

FS Stonyakin, D Dvinskikh, P Dvurechensky… - … Optimization Theory and …, 2019 - Springer
We consider optimization methods for convex minimization problems under inexact
information on the objective function. We introduce an inexact model of the objective, which as …
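
One common form of this definition, reconstructed from memory rather than quoted (the exact inequalities in the paper may differ): a function ψ_δ(y, x), convex in y with ψ_δ(x, x) = 0, is a (δ, L)-model of f at x if

```latex
0 \;\le\; f(y) - f(x) - \psi_\delta(y, x) \;\le\; \frac{L}{2}\,\|y - x\|^2 + \delta
\quad \text{for all } y \in Q.
```

Taking ψ_δ(y, x) = ⟨∇f(x), y − x⟩ with δ = 0 recovers the usual L-smoothness bounds, and gradient-type methods run on the model accumulate an error of order δ.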

An accelerated method for decentralized distributed stochastic optimization over time-varying graphs

A Rogozin, M Bochko, P Dvurechensky… - 2021 60th IEEE …, 2021 - ieeexplore.ieee.org
We consider a distributed stochastic optimization problem that is solved by a decentralized
network of agents with only local communication between neighboring agents. The goal of …

Gradient-free methods with inexact oracle for convex-concave stochastic saddle-point problem

A Beznosikov, A Sadiev, A Gasnikov - International Conference on …, 2020 - Springer
In the paper, we generalize the approach of Gasnikov et al. 2017, which makes it possible to solve
(stochastic) convex optimization problems with an inexact gradient-free oracle, to the convex …
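
To fix the setting, a bare-bones zeroth-order gradient descent-ascent sketch on a toy saddle-point problem — the generic scheme with one two-point estimate per block, not the method analyzed in the paper; the test function and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def two_point(phi, z, tau=1e-4):
    """Two-point zeroth-order estimate of the gradient of phi at z."""
    d = z.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return d * (phi(z + tau * e) - phi(z - tau * e)) / (2 * tau) * e

# Toy convex-concave problem: phi(x, y) = 0.5||x||^2 - 0.5||y||^2 + <x, y>,
# whose unique saddle point is (0, 0).
phi = lambda x, y: 0.5 * x @ x - 0.5 * y @ y + x @ y

x, y = np.ones(2), np.ones(2)
eta = 0.05
for _ in range(5000):
    gx = two_point(lambda u: phi(u, y), x)   # descend in x
    gy = two_point(lambda v: phi(x, v), y)   # ascend in y
    x, y = x - eta * gx, y + eta * gy
print(x, y)  # both near zero
```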

Solving smooth min-min and min-max problems by mixed oracle algorithms

E Gladin, A Sadiev, A Gasnikov… - … Optimization Theory and …, 2021 - Springer
In this paper, we consider two types of problems that have some similarity in their structure,
namely, min-min problems and min-max saddle-point problems. Our approach is based on …