Introduction to online convex optimization

E Hazan - Foundations and Trends® in Optimization, 2016 - nowpublishers.com
This monograph portrays optimization as a process. In many practical applications the
environment is so complex that it is infeasible to lay out a comprehensive theoretical model …

Conditional gradient methods

G Braun, A Carderera, CW Combettes… - arXiv preprint arXiv …, 2022 - arxiv.org
The purpose of this survey is to serve both as a gentle introduction and a coherent overview
of state-of-the-art Frank--Wolfe algorithms, also called conditional gradient algorithms, for …
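Since several of the entries below center on Frank-Wolfe (conditional gradient) methods, a minimal sketch may help orient the reader. This is a generic textbook Frank-Wolfe loop over the probability simplex with the classic 2/(t+2) step size, not the specific variants surveyed above; the function names and the quadratic objective in the usage note are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=1000):
    """Minimize a smooth convex f over the probability simplex using
    Frank-Wolfe (conditional gradient). `grad` returns the gradient of f.
    Each iterate is a convex combination of simplex vertices, so no
    projection step is ever needed."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        # Linear minimization oracle over the simplex: the minimizing
        # vertex is the basis vector at the smallest gradient coordinate.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2)            # classic step-size schedule
        x = (1 - gamma) * x + gamma * s  # stays feasible by convexity
    return x
```

For example, minimizing f(x) = ½‖x − b‖² over the simplex with b = (0.2, 0.3, 0.5) (so grad(x) = x − b) from the uniform starting point drives the iterate toward b at the standard O(1/t) rate in function value.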

Distributed online convex optimization with an aggregative variable

X Li, X Yi, L Xie - IEEE Transactions on Control of Network …, 2021 - ieeexplore.ieee.org
This article investigates distributed online convex optimization in the presence of an
aggregative variable without any global/central coordinators over a multiagent network. In …

Faster projection-free online learning

E Hazan, E Minasyan - Conference on Learning Theory, 2020 - proceedings.mlr.press
In many online learning problems the computational bottleneck for gradient-based methods
is the projection operation. For this reason, in many problems the most efficient algorithms …
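To make the projection bottleneck concrete, compare the two feasibility primitives on the ℓ1 ball: a Euclidean projection needs a sort (O(n log n), via the standard sorting-based algorithm), while the linear minimization oracle used by projection-free methods is a single argmax (O(n)). This is a generic illustration under those assumptions, not code from the paper above.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <g, s> is a signed, scaled basis
    vector at the largest-magnitude gradient coordinate -- O(n)."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def project_l1_ball(x, radius=1.0):
    """Euclidean projection onto the l1 ball via the standard
    sorting-based soft-thresholding algorithm -- O(n log n)."""
    if np.abs(x).sum() <= radius:
        return x
    u = np.sort(np.abs(x))[::-1]           # sorted magnitudes, descending
    css = np.cumsum(u)
    # Largest index rho where the running threshold stays positive.
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)
```

The gap widens on richer sets: projecting onto a nuclear-norm ball requires a full SVD, while its linear minimization oracle needs only the top singular vector pair.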

Cautious regret minimization: Online optimization with long-term budget constraints

N Liakopoulos, A Destounis… - International …, 2019 - proceedings.mlr.press
We study a class of online convex optimization problems with long-term budget constraints
that arise naturally as reliability guarantees or total consumption constraints. In this general …

New projection-free algorithms for online convex optimization with adaptive regret guarantees

D Garber, B Kretzu - Conference on Learning Theory, 2022 - proceedings.mlr.press
We present new efficient projection-free algorithms for online convex optimization
(OCO), where by projection-free we refer to algorithms that avoid computing orthogonal …

Online continuous submodular maximization: From full-information to bandit feedback

M Zhang, L Chen, H Hassani… - Advances in Neural …, 2019 - proceedings.neurips.cc
In this paper, we propose three online algorithms for submodular maximization. The first
one, Mono-Frank-Wolfe, reduces the number of per-function gradient evaluations from …

Online learning via offline greedy algorithms: Applications in market design and optimization

R Niazadeh, N Golrezaei, JR Wang, F Susan… - Proceedings of the …, 2021 - dl.acm.org
Motivated by online decision-making in time-varying combinatorial environments, we study
the problem of transforming offline algorithms to their online counterparts. We focus on …

Learning pruning-friendly networks via Frank-Wolfe: One-shot, any-sparsity, and no retraining

M Lu, X Luo, T Chen, W Chen, D Liu… - … Conference on Learning …, 2022 - openreview.net
We present a novel framework to train a large deep neural network (DNN) for only
once, which can then be pruned to any sparsity ratio to preserve competitive …

Stochastic continuous submodular maximization: Boosting via non-oblivious function

Q Zhang, Z Deng, Z Chen, H Hu… - … on Machine Learning, 2022 - proceedings.mlr.press
In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and
online settings, which can benefit wide applications in machine learning and operations …