[BOOK][B] Understanding machine learning: From theory to algorithms
S Shalev-Shwartz, S Ben-David - 2014 - books.google.com
Machine learning is one of the fastest growing areas of computer science, with far-reaching
applications. The aim of this textbook is to introduce machine learning, and the algorithmic …
Online learning and online convex optimization
S Shalev-Shwartz - Foundations and Trends® in Machine …, 2012 - nowpublishers.com
Online learning is a well-established learning paradigm with both theoretical and
practical appeal. The goal of online learning is to make a sequence of accurate predictions …
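As a point of reference for the entries below, the standard (static) regret that this survey analyzes is usually written as follows; the notation is mine rather than quoted from the abstract:

\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{u \in \mathcal{X}} \sum_{t=1}^{T} f_t(u),
\]

where $x_t$ is the learner's decision and $f_t$ the convex loss revealed in round $t$.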
[PDF][PDF] Adaptive subgradient methods for online learning and stochastic optimization.
J Duchi, E Hazan, Y Singer - Journal of Machine Learning Research, 2011 - jmlr.org
We present a new family of subgradient methods that dynamically incorporate knowledge of
the geometry of the data observed in earlier iterations to perform more informative gradient …
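The per-coordinate geometry adaptation the abstract alludes to can be illustrated with a minimal sketch of the diagonal variant of such an adaptive subgradient update; parameter names and values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal adaptive-subgradient (AdaGrad-style) step.

    accum is the running sum of squared gradients per coordinate;
    coordinates that have seen large gradients take smaller steps,
    which is how past geometry informs the current update.
    """
    accum = accum + grad ** 2
    x = x - lr * grad / (np.sqrt(accum) + eps)
    return x, accum
```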
Adaptive online learning in dynamic environments
L Zhang, S Lu, ZH Zhou - Advances in Neural Information Processing Systems, 2018 - proceedings.neurips.cc
In this paper, we study online convex optimization in dynamic environments, and aim to
bound the dynamic regret with respect to any sequence of comparators. Existing work has …
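For context (again with assumed notation): dynamic regret measures the learner against an arbitrary comparator sequence $u_1, \dots, u_T$ rather than a single fixed point, and bounds in this line of work typically scale with the path length of that sequence:

\[
\text{D-Regret}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t),
\qquad
P_T \;=\; \sum_{t=2}^{T} \|u_t - u_{t-1}\|_2 .
\]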
Online optimization: Competing with dynamic comparators
A Jadbabaie, A Rakhlin… - Artificial Intelligence …, 2015 - proceedings.mlr.press
Recent literature on online learning has focused on developing adaptive algorithms that
take advantage of a regularity of the sequence of observations, yet retain worst-case …
Provable guarantees for gradient-based meta-learning
M Khodak, MF Balcan, A Talwalkar - International Conference on Machine Learning, 2019 - proceedings.mlr.press
We study the problem of meta-learning through the lens of online convex optimization,
developing a meta-algorithm bridging the gap between popular gradient-based meta …
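One way such a gradient-based meta-learner can be organized is sketched below: run a few gradient steps within each task, then move a shared initialization toward the resulting iterates. This is a generic Reptile-style outer loop on toy quadratic tasks, not the paper's specific meta-algorithm; all names and constants are assumptions:

```python
import numpy as np

def within_task_sgd(init, grad_fn, steps=5, lr=0.1):
    # Run a few gradient steps on one task, starting from the shared init.
    x = init.copy()
    for _ in range(steps):
        x -= lr * grad_fn(x)
    return x

def meta_update(init, task_grads, meta_lr=0.5):
    # Move the shared initialization toward each task's final iterate
    # (Reptile-style outer loop; NOT the paper's exact meta-algorithm).
    for grad_fn in task_grads:
        x_final = within_task_sgd(init, grad_fn)
        init = init + meta_lr * (x_final - init)
    return init

# Toy tasks: quadratics f_i(x) = 0.5 * ||x - c_i||^2 with nearby optima c_i,
# so a good shared initialization sits near the common center.
rng = np.random.default_rng(1)
centers = [rng.normal(loc=1.0, scale=0.1, size=3) for _ in range(10)]
task_grads = [lambda x, c=c: x - c for c in centers]  # gradient of each quadratic

init = np.zeros(3)
for _ in range(20):
    init = meta_update(init, task_grads)
print(init)  # drifts toward the shared task region around 1.0
```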
Dynamic regret of convex and smooth functions
P Zhao, YJ Zhang, L Zhang, ZH Zhou - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
We investigate online convex optimization in non-stationary environments and choose the
dynamic regret as the performance measure, defined as the difference between cumulative …
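Smoothness is what makes problem-dependent guarantees possible here; a quantity that recurs in this strand of the literature (my notation, not quoted from the paper) is the gradient variation

\[
V_T \;=\; \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \big\| \nabla f_t(x) - \nabla f_{t-1}(x) \big\|_2^2 ,
\]

which is small whenever consecutive loss functions change slowly.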
Adaptive bound optimization for online convex optimization
HB McMahan, M Streeter - arXiv preprint arXiv:1002.4908, 2010 - arxiv.org
We introduce a new online convex optimization algorithm that adaptively chooses its
regularization function based on the loss functions observed so far. This is in contrast to …
Follow-the-regularized-leader and mirror descent: Equivalence theorems and l1 regularization
B McMahan - … of the Fourteenth International Conference on …, 2011 - proceedings.mlr.press
We prove that many mirror descent algorithms for online convex optimization (such as online
gradient descent) have an equivalent interpretation as follow-the-regularized-leader (FTRL) …
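The flavor of this equivalence can be seen in the simplest unconstrained case, with linearized losses $\langle g_s, \cdot \rangle$ and a fixed quadratic regularizer (a standard textbook calculation, not the paper's general theorem):

\[
x_{t+1} \;=\; \operatorname*{arg\,min}_{x} \Big( \sum_{s=1}^{t} \langle g_s, x \rangle + \tfrac{1}{2\eta} \|x\|_2^2 \Big) \;=\; -\,\eta \sum_{s=1}^{t} g_s ,
\]

which is exactly the iterate produced by unconstrained online gradient descent started at the origin with step size $\eta$.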
Online optimization with gradual variations
CK Chiang, T Yang, CJ Lee, M Mahdavi, CJ Lu, R Jin, S Zhu - Conference on Learning Theory, 2012 - proceedings.mlr.press
We study the online convex optimization problem, in which an online algorithm has to make
repeated decisions with convex loss functions and hopes to achieve a small regret. We …
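The repeated-decision protocol these papers share is easy to make concrete; below is a minimal online-gradient-descent loop against slowly drifting quadratic losses, a toy stand-in for "gradual variations" (the loss family and every constant are assumptions):

```python
import numpy as np

# Toy protocol: quadratic losses f_t(x) = 0.5 * ||x - z_t||^2 whose
# minimizers z_t drift slowly, i.e. an environment with gradual variation.
rng = np.random.default_rng(0)
T, d, lr = 100, 5, 0.1
x = np.zeros(d)               # learner's first decision
z = rng.normal(size=d)        # initial (hidden) loss minimizer
total_loss = 0.0
for t in range(T):
    z = z + 0.01 * rng.normal(size=d)        # environment changes gradually
    total_loss += 0.5 * np.sum((x - z) ** 2) # learner pays f_t(x_t)
    grad = x - z                             # gradient of f_t at x_t
    x = x - lr * grad                        # online gradient descent step
print(f"average loss: {total_loss / T:.4f}")
```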