Introduction to online convex optimization
E Hazan - Foundations and Trends® in Optimization, 2016 - nowpublishers.com
This monograph portrays optimization as a process. In many practical applications the
environment is so complex that it is infeasible to lay out a comprehensive theoretical model …
Conditional gradient methods
G Braun, A Carderera, CW Combettes… - arXiv preprint arXiv …, 2022 - arxiv.org
The purpose of this survey is to serve both as a gentle introduction and a coherent overview
of state-of-the-art Frank–Wolfe algorithms, also called conditional gradient algorithms, for …
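To make the entry above concrete, here is a minimal sketch of the classic Frank–Wolfe (conditional gradient) step the survey covers: each iteration calls a linear minimization oracle (LMO) over the feasible set instead of a projection. The quadratic objective, simplex constraint set, and target vector `b` below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, T=100):
    """Minimize a smooth convex f over a compact convex set C.

    grad: gradient oracle for f
    lmo:  linear minimization oracle, lmo(g) = argmin_{v in C} <g, v>
    """
    x = np.array(x0, dtype=float)
    for t in range(T):
        g = grad(x)
        v = lmo(g)               # vertex minimizing the linearization of f at x
        gamma = 2.0 / (t + 2.0)  # standard open-loop step size
        x = x + gamma * (v - x)  # convex combination, so x stays in C
    return x

# Illustrative use: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - b)

def simplex_lmo(g):
    # LMO for the simplex: the best vertex is the coordinate with smallest gradient.
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

x_star = frank_wolfe(grad, simplex_lmo, np.array([1.0, 0.0, 0.0]))
```

With the 2/(t+2) step size the method converges at an O(1/T) rate in function value, without ever projecting onto the simplex.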
Distributed online convex optimization with an aggregative variable
This article investigates distributed online convex optimization in the presence of an
aggregative variable without any global/central coordinators over a multiagent network. In …
Faster projection-free online learning
E Hazan, E Minasyan - Conference on Learning Theory, 2020 - proceedings.mlr.press
In many online learning problems the computational bottleneck for gradient-based methods
is the projection operation. For this reason, in many problems the most efficient algorithms …
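The entry above concerns replacing the projection step of online gradient methods with a single linear minimization per round. The simplified online Frank–Wolfe update below is a sketch of that idea under illustrative assumptions (a 1/t averaging step size and cumulative-gradient LMO), not the specific algorithm or rates of the paper.

```python
import numpy as np

def online_frank_wolfe(grad_seq, lmo, x0):
    """One projection-free update per round: the projection in online
    gradient descent is replaced by a single LMO call."""
    x = np.array(x0, dtype=float)
    g_sum = np.zeros_like(x)
    iterates = []
    for t, grad in enumerate(grad_seq, start=1):
        iterates.append(x.copy())        # play x, then observe the loss gradient
        g_sum += grad(x)                 # accumulate gradients seen so far
        v = lmo(g_sum)                   # one LMO call instead of a projection
        gamma = 1.0 / t                  # illustrative averaging step size
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return iterates

# Illustrative use: repeated quadratic losses over the probability simplex.
b = np.array([0.1, 0.6, 0.3])
losses = [lambda x: 2.0 * (x - b) for _ in range(20)]

def simplex_lmo(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

iters = online_frank_wolfe(losses, simplex_lmo, np.array([1.0, 0.0, 0.0]))
```

Because every update is a convex combination of the current iterate and a feasible vertex, all played points remain in the feasible set, with no projection ever computed.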
Cautious regret minimization: Online optimization with long-term budget constraints
N Liakopoulos, A Destounis… - International …, 2019 - proceedings.mlr.press
We study a class of online convex optimization problems with long-term budget constraints
that arise naturally as reliability guarantees or total consumption constraints. In this general …
New projection-free algorithms for online convex optimization with adaptive regret guarantees
We present new efficient projection-free algorithms for online convex optimization
(OCO), where by projection-free we refer to algorithms that avoid computing orthogonal …
Online continuous submodular maximization: From full-information to bandit feedback
In this paper, we propose three online algorithms for submodular maximization. The first
one, Mono-Frank-Wolfe, reduces the number of per-function gradient evaluations from …
Online learning via offline greedy algorithms: Applications in market design and optimization
Motivated by online decision-making in time-varying combinatorial environments, we study
the problem of transforming offline algorithms to their online counterparts. We focus on …
Learning pruning-friendly networks via frank-wolfe: One-shot, any-sparsity, and no retraining
We present a novel framework to train a large deep neural network (DNN) only
once, which can then be pruned to any sparsity ratio to preserve competitive …
Stochastic continuous submodular maximization: Boosting via non-oblivious function
In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and
online settings, which benefits a wide range of applications in machine learning and operations …