Randomized gradient-free methods in convex optimization

A Gasnikov, D Dvinskikh, P Dvurechensky… - Encyclopedia of …, 2023 - Springer
Consider a convex optimization problem $\min_{x \in Q \subseteq \mathbb{R}^d} f(x)$ (1) with convex feasible set $Q$
and convex objective $f$ possessing a zeroth-order (gradient/derivative-free) oracle [83]. The …
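A minimal illustration of the zeroth-order setting (a sketch under standard assumptions; the smoothing radius $\gamma > 0$ and random direction $e$ are illustrative, not taken from the snippet): the unavailable gradient of $f$ in (1) is replaced by the two-point randomized estimate
$$ g(x) = \frac{d}{2\gamma}\bigl(f(x + \gamma e) - f(x - \gamma e)\bigr)\, e, \qquad e \sim \mathrm{Uniform}(S^{d-1}), $$
which can be computed from zeroth-order oracle calls alone and is an unbiased estimate of the gradient of a smoothed approximation of $f$.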

Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs

A Lobanov, A Veprikov, G Konin, A Beznosikov… - Computational …, 2023 - Springer
Distributed optimization has a rich history and has demonstrated its effectiveness in many
machine learning applications. In this paper we study a subclass of distributed …
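For context, the decentralized problems studied in this line of work are typically of the consensus form (a sketch of the standard formulation; the number of nodes $m$ and local functions $f_i$ are generic, not taken from the snippet):
$$ \min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{m} \sum_{i=1}^{m} f_i(x), $$
where each $f_i$ is held by a separate node and nodes exchange information only with their current neighbors in a time-varying communication graph.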

Highly smooth zeroth-order methods for solving optimization problems under the PL condition

AV Gasnikov, AV Lobanov, FS Stonyakin - … Mathematics and Mathematical …, 2024 - Springer
In this paper, we study the black box optimization problem under the Polyak–Lojasiewicz
(PL) condition, assuming that the objective function is not just smooth, but has higher …
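For reference, the Polyak–Lojasiewicz condition invoked here is the standard inequality (with $\mu > 0$ the PL constant and $f^*$ the minimal value):
$$ f(x) - f^* \le \frac{1}{2\mu}\,\|\nabla f(x)\|^2 \quad \text{for all } x, $$
which yields linear convergence of gradient-type methods without requiring convexity.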

On Some Works of Boris Teodorovich Polyak on the Convergence of Gradient Methods and Their Development

SS Ablaev, AN Beznosikov, AV Gasnikov… - Computational …, 2024 - Springer
The paper presents a review of the current state of subgradient and accelerated convex
optimization methods, including settings with noise and access to various …

Improved Iteration Complexity in Black-Box Optimization Problems under Higher Order Smoothness Function Condition

A Lobanov - arXiv preprint arXiv:2407.03507, 2024 - arxiv.org
This paper is devoted to the study of the black-box optimization problem (common in many
applications), where the black box represents a gradient-free oracle $\tilde{f} = f …
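The oracle formula in the snippet is truncated; a model commonly used in this line of work (an assumption here, not recovered from the snippet) returns function values corrupted by bounded noise:
$$ \tilde{f}(x) = f(x) + \delta(x), \qquad |\delta(x)| \le \Delta, $$
so only inexact zeroth-order information about $f$ is available to the method.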

Contextual Continuum Bandits: Static Versus Dynamic Regret

A Akhavan, K Lounici, M Pontil… - arXiv preprint arXiv …, 2024 - arxiv.org
We study the contextual continuum bandits problem, where the learner sequentially receives
a side information vector and has to choose an action in a convex set, minimizing a function …
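To make the two performance measures in the title concrete (standard definitions assumed here, with $c_t$ the context, $x_t$ the chosen action in the convex set $Q$, and $T$ the horizon):
$$ R^{\mathrm{stat}}_T = \sum_{t=1}^{T} f(x_t; c_t) - \min_{x \in Q} \sum_{t=1}^{T} f(x; c_t), \qquad R^{\mathrm{dyn}}_T = \sum_{t=1}^{T} \Bigl( f(x_t; c_t) - \min_{x \in Q} f(x; c_t) \Bigr), $$
so dynamic regret compares against the best action for each context, while static regret compares against a single fixed action.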

Estimating the minimizer and the minimum value of a regression function under passive design

A Akhavan, D Gogolashvili, AB Tsybakov - Journal of Machine Learning …, 2024 - jmlr.org
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the
minimum value $f^*$ of a smooth and strongly convex regression function $f$ from the …
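In the passive design model referred to here (a sketch of the standard setup; the sample size $n$ and noise terms $\xi_i$ are generic), the data are i.i.d. observations
$$ Y_i = f(X_i) + \xi_i, \qquad i = 1, \dots, n, $$
where the design points $X_i$ are not chosen by the statistician, and the goal is to recover $\boldsymbol{x}^* = \arg\min_x f(x)$ and $f^* = f(\boldsymbol{x}^*)$ from $(X_1, Y_1), \dots, (X_n, Y_n)$.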

Polyak's Method Based on the Stochastic Lyapunov Function for Justifying the Consistency of Estimates Produced by a Stochastic Approximation Search Algorithm …

ON Granichin, YV Ivanskii, KD Kopylova - … Mathematics and Mathematical …, 2024 - Springer
In 1976–1977, Polyak published two remarkable papers in the journal Avtomatika i Telemekhanika
(Automation and Remote Control) on how to study the properties of …

Gradient-Free Algorithms for Solving Stochastic Saddle Optimization Problems with the Polyak–Lojasiewicz Condition

SI Sadykov, AV Lobanov… - …, 2023 - journals.rcsi.science
This paper focuses on solving a subclass of stochastic nonconvex-concave black-box
optimization problems with a saddle point that satisfies the Polyak–Lojasiewicz condition. To …
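The class of problems described here is of the saddle-point form (a generic formulation; details beyond the snippet are assumptions):
$$ \min_{x} \max_{y} F(x, y), $$
where $F(\cdot, y)$ may be nonconvex, $F(x, \cdot)$ is concave, and the Polyak–Lojasiewicz condition plays the role that convexity in $x$ would otherwise play in the convergence analysis of gradient-free methods.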