[BOOK][B] Understanding machine learning: From theory to algorithms

S Shalev-Shwartz, S Ben-David - 2014 - books.google.com
Machine learning is one of the fastest growing areas of computer science, with far-reaching
applications. The aim of this textbook is to introduce machine learning, and the algorithmic …

Online learning and online convex optimization

S Shalev-Shwartz - Foundations and Trends® in Machine …, 2012 - nowpublishers.com
Online learning is a well-established learning paradigm which has both theoretical and
practical appeal. The goal of online learning is to make a sequence of accurate predictions …
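
The standard yardstick in this setting is regret. With the usual online convex optimization notation (losses $f_t$, decisions $x_t$ from a convex set $\mathcal{X}$; notation assumed here, not quoted from the snippet):

$$\mathrm{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x\in\mathcal{X}} \sum_{t=1}^{T} f_t(x).$$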

[PDF][PDF] Adaptive subgradient methods for online learning and stochastic optimization.

J Duchi, E Hazan, Y Singer - Journal of machine learning research, 2011 - jmlr.org
We present a new family of subgradient methods that dynamically incorporate knowledge of
the geometry of the data observed in earlier iterations to perform more informative gradient …
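
A minimal sketch of the diagonal per-coordinate scaling these methods are known for (commonly called AdaGrad); the function names and the toy loss below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, eps=1e-8, steps=100):
    """Diagonal AdaGrad sketch: scale each coordinate by the root of its
    accumulated squared gradients."""
    x = np.asarray(x0, dtype=float)
    g_sq = np.zeros_like(x)              # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(x)                   # (sub)gradient of the current loss
        g_sq += g * g
        x -= lr * g / (np.sqrt(g_sq) + eps)
    return x

# toy usage: minimize the quadratic ||x - 3||^2 (illustrative only)
x_hat = adagrad(lambda x: 2.0 * (x - 3.0), x0=np.zeros(2))
```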

Adaptive online learning in dynamic environments

L Zhang, S Lu, ZH Zhou - Advances in neural information …, 2018 - proceedings.neurips.cc
In this paper, we study online convex optimization in dynamic environments, and aim to
bound the dynamic regret with respect to any sequence of comparators. Existing work has …
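
Dynamic regret replaces the single fixed comparator of standard regret with an arbitrary comparator sequence $u_1,\dots,u_T$; in the usual notation (assumed, not quoted from the snippet):

$$\mathrm{D\text{-}Regret}_T(u_1,\dots,u_T) = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t), \qquad P_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|,$$

where bounds in this line of work are typically stated in terms of the path length $P_T$ of the comparators.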

Online optimization: Competing with dynamic comparators

A Jadbabaie, A Rakhlin… - Artificial Intelligence …, 2015 - proceedings.mlr.press
Recent literature on online learning has focused on developing adaptive algorithms that
take advantage of a regularity of the sequence of observations, yet retain worst-case …

Provable guarantees for gradient-based meta-learning

MF Balcan, M Khodak… - … Conference on Machine …, 2019 - proceedings.mlr.press
We study the problem of meta-learning through the lens of online convex optimization,
developing a meta-algorithm bridging the gap between popular gradient-based meta …
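
One well-known gradient-based meta-update in this family is the Reptile-style step, where a shared initialization is nudged toward each task's final within-task iterate; the sketch below illustrates that generic idea only and is not the paper's meta-algorithm (all names are hypothetical):

```python
import numpy as np

def within_task_gd(init, grad_fn, lr=0.05, steps=20):
    """Plain gradient descent inside a single task."""
    w = init.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def meta_train(task_grad_fns, dim, meta_lr=0.2, rounds=50, seed=0):
    """Reptile-style meta-learning sketch: move the shared initialization
    toward the solution found on each sampled task."""
    phi = np.zeros(dim)                       # shared initialization
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        grad_fn = task_grad_fns[rng.integers(len(task_grad_fns))]
        w_task = within_task_gd(phi, grad_fn)
        phi += meta_lr * (w_task - phi)       # nudge init toward task optimum
    return phi
```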

Dynamic regret of convex and smooth functions

P Zhao, YJ Zhang, L Zhang… - Advances in Neural …, 2020 - proceedings.neurips.cc
We investigate online convex optimization in non-stationary environments and choose the
dynamic regret as the performance measure, defined as the difference between cumulative …
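
For smooth losses, the problem-dependent quantity that typically sharpens such bounds is the gradient variation (standard form, assumed here rather than quoted):

$$V_T = \sum_{t=2}^{T} \sup_{x\in\mathcal{X}} \|\nabla f_t(x) - \nabla f_{t-1}(x)\|^2,$$

which can replace the worst-case dependence on $T$ when consecutive losses change slowly.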

Adaptive bound optimization for online convex optimization

HB McMahan, M Streeter - arXiv preprint arXiv:1002.4908, 2010 - arxiv.org
We introduce a new online convex optimization algorithm that adaptively chooses its
regularization function based on the loss functions observed so far. This is in contrast to …
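
Concretely, adaptive regularization of this kind is usually presented through per-coordinate step sizes of the form (a standard presentation, assumed rather than quoted):

$$\eta_{t,i} \propto \frac{1}{\sqrt{\sum_{s\le t} g_{s,i}^2}},$$

equivalently, a quadratic regularizer whose per-coordinate curvature grows with the accumulated squared gradients, the same scaling as in the AdaGrad sketch above.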

Follow-the-regularized-leader and mirror descent: Equivalence theorems and l1 regularization

B McMahan - … of the Fourteenth International Conference on …, 2011 - proceedings.mlr.press
We prove that many mirror descent algorithms for online convex optimization (such as online
gradient descent) have an equivalent interpretation as follow-the-regularized-leader (FTRL) …
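
A minimal instance of the equivalence, in the unconstrained linearized case with standard notation (assumed here): unrolling online gradient descent $x_{t+1} = x_t - \eta g_t$ gives

$$x_{t+1} = x_1 - \eta \sum_{s=1}^{t} g_s = \arg\min_{x} \Big\{ \Big\langle \sum_{s=1}^{t} g_s, x \Big\rangle + \frac{1}{2\eta}\|x - x_1\|^2 \Big\},$$

which is exactly the follow-the-regularized-leader update with a fixed quadratic regularizer.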

Online optimization with gradual variations

CK Chiang, T Yang, CJ Lee… - … on Learning Theory, 2012 - proceedings.mlr.press
We study the online convex optimization problem, in which an online algorithm has to make
repeated decisions with convex loss functions and hopes to achieve a small regret. We …
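
A generic sketch of the "use the previous gradient as a hint" idea behind methods that exploit gradual variation; this is unconstrained optimistic online gradient descent, not necessarily the paper's exact algorithm:

```python
import numpy as np

def optimistic_ogd(grad_fns, x0, lr=0.1):
    """Optimistic online gradient descent (unconstrained sketch): play using a
    hint equal to the previous round's gradient, then update the leader point."""
    y = np.asarray(x0, dtype=float)     # leader (secondary) sequence
    hint = np.zeros_like(y)             # predicted gradient for the next round
    plays = []
    for grad_fn in grad_fns:            # grad_fns[t] returns the gradient of f_t
        x = y - lr * hint               # decision for this round
        plays.append(x)
        g = grad_fn(x)                  # observe the true gradient
        y = y - lr * g                  # leader update with the true gradient
        hint = g                        # hint for the next round
    return plays

# toy usage: slowly drifting quadratic losses (illustrative only)
losses = [(lambda x, c=0.01 * t: 2.0 * (x - c)) for t in range(100)]
xs = optimistic_ogd(losses, x0=np.zeros(1))
```

When successive losses change slowly, the hint closely matches the next gradient and the per-round correction is small, which is the mechanism behind variation-dependent regret bounds.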