Competitive on‐line statistics
V Vovk - International Statistical Review, 2001 - Wiley Online Library
A radically new approach to statistical modelling, which combines mathematical techniques
of Bayesian statistics with the philosophy of the theory of competitive on‐line algorithms, has …
A decision-theoretic generalization of on-line learning and an application to boosting
Y Freund, RE Schapire - Journal of computer and system sciences, 1997 - Elsevier
In the first part of the paper we consider the problem of dynamically apportioning resources
among a set of options in a worst-case on-line framework. The model we study can be …
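The abstract above describes the resource-allocation setting behind the Hedge algorithm. Below is a minimal sketch of a Hedge-style multiplicative-weights allocator, assuming N options with per-round losses in [0, 1]; the function name, the fixed learning rate eta, and the toy data are illustrative choices, not taken from the paper.

```python
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Hedge-style multiplicative-weights allocation.

    loss_matrix: (T, N) array; loss_matrix[t, i] is the loss of option i
    at round t, assumed to lie in [0, 1]. Returns the learner's total
    (expected) loss and the final weight vector.
    """
    T, N = loss_matrix.shape
    weights = np.ones(N)                              # start uniform
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()                   # allocation over options
        total_loss += p @ loss_matrix[t]              # learner's expected loss
        weights *= np.exp(-eta * loss_matrix[t])      # penalize lossy options
    return total_loss, weights

# Toy usage: two options, the second consistently better.
losses = np.array([[0.9, 0.1]] * 20)
print(hedge(losses))
```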
A desicion-theoretic generalization of on-line learning and an application to boosting
Y Freund, RE Schapire - European conference on computational learning …, 1995 - Springer
We consider the problem of dynamically apportioning resources among a set of options in a
worst-case on-line framework. The model we study can be interpreted as a broad, abstract …
Exponentiated gradient versus gradient descent for linear predictors
J Kivinen, MK Warmuth - information and computation, 1997 - Elsevier
We consider two algorithms for on-line prediction based on a linear model. The algorithms
are the well-known gradient descent (GD) algorithm and a new algorithm, which we call …
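As a rough illustration of the two updates compared in this paper, here is a sketch of a gradient-descent (GD) step and an exponentiated-gradient (EG) step for an online linear predictor under squared loss; the learning rates and the simplex normalization used for EG are assumptions made for the sketch, not the paper's exact tuning.

```python
import numpy as np

def gd_step(w, x, y, eta=0.1):
    """GD update for squared loss on a linear predictor."""
    y_hat = w @ x
    return w - eta * (y_hat - y) * x

def eg_step(w, x, y, eta=0.1):
    """EG update; w is kept on the probability simplex."""
    y_hat = w @ x
    w_new = w * np.exp(-eta * (y_hat - y) * x)
    return w_new / w_new.sum()

# Toy usage: the target is the first coordinate of x.
rng = np.random.default_rng(0)
w_gd = np.zeros(4)
w_eg = np.ones(4) / 4
for _ in range(200):
    x = rng.uniform(0, 1, size=4)
    y = x[0]
    w_gd = gd_step(w_gd, x, y)
    w_eg = eg_step(w_eg, x, y)
print(w_gd, w_eg)
```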
How to use expert advice
We analyze algorithms that predict a binary value by combining the predictions of several
prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no …
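The binary-prediction setting described here is the one handled by weighted-majority-style voting. The sketch below is one standard variant, assuming 0/1 expert predictions and a multiplicative penalty beta for experts that err; it is not claimed to reproduce the specific algorithms analyzed in the paper.

```python
import numpy as np

def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic weighted-majority vote over binary experts.

    expert_preds: (T, N) 0/1 predictions of N experts over T rounds.
    outcomes:     (T,)   true 0/1 labels.
    Experts that err have their weight multiplied by beta in (0, 1).
    Returns the number of mistakes made by the combined predictor.
    """
    T, N = expert_preds.shape
    w = np.ones(N)
    mistakes = 0
    for t in range(T):
        vote_1 = w[expert_preds[t] == 1].sum()
        vote_0 = w[expert_preds[t] == 0].sum()
        pred = 1 if vote_1 >= vote_0 else 0
        mistakes += int(pred != outcomes[t])
        w[expert_preds[t] != outcomes[t]] *= beta   # demote wrong experts
    return mistakes
```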
Tracking the best expert
M Herbster, MK Warmuth - Machine learning, 1998 - Springer
We generalize the recent relative loss bounds for on-line algorithms where the additional
loss of the algorithm on the whole sequence of examples over the loss of the best expert is …
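A rough sketch in the spirit of this paper's tracking bounds: after the usual exponential loss update, a fraction alpha of the weight mass is shared uniformly so the algorithm can follow a shifting best expert. The uniform-sharing variant, the learning rate eta, and the switching rate alpha below are illustrative assumptions, not the paper's exact Fixed-Share update.

```python
import numpy as np

def fixed_share_step(w, losses, eta=0.5, alpha=0.05):
    """One round of a Fixed-Share style update for a shifting best expert.

    w:      current weight vector (sums to 1).
    losses: per-expert losses for this round, in [0, 1].
    After the exponential loss update, a fraction alpha of each weight is
    redistributed uniformly, so experts that were bad in the past can
    recover quickly when the best expert changes.
    """
    v = w * np.exp(-eta * losses)
    v /= v.sum()
    n = len(w)
    return (1 - alpha) * v + alpha / n * np.ones(n)
```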
Xnas: Neural architecture search with expert advice
This paper introduces a novel optimization method for differential neural architecture search,
based on the theory of prediction with expert advice. Its optimization criterion is well fitted for …
A game of prediction with expert advice
VG Vovk - Proceedings of the eighth annual conference on …, 1995 - dl.acm.org
We consider the following situation. At each point of discrete time the learner must make a
prediction; he is given the predictions made by a pool of experts. Each prediction and the …
Regret in the on-line decision problem
At each point in time a decision maker must make a decision. The payoff in a period from the
decision made depends on the decision as well as on the state of the world that obtains at …
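For concreteness, the quantity at stake in this setting can be sketched as external regret: the learner's cumulative loss minus that of the single best fixed decision in hindsight. The helper below is an illustrative computation under that reading, not an algorithm taken from the paper.

```python
import numpy as np

def external_regret(decision_losses, chosen):
    """External regret of a decision sequence.

    decision_losses: (T, K) array; the loss of each of K decisions in each
                     period, as determined by the state of the world.
    chosen:          length-T integer array of decisions actually made.
    Returns the learner's total loss minus the total loss of the single
    best fixed decision in hindsight.
    """
    T = decision_losses.shape[0]
    learner_loss = decision_losses[np.arange(T), chosen].sum()
    best_fixed = decision_losses.sum(axis=0).min()
    return learner_loss - best_fixed
```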
Using and combining predictors that specialize
Y Freund, RE Schapire, Y Singer… - Proceedings of the twenty …, 1997 - dl.acm.org
We study online learning algorithms that predict by combining the predictions of several
subordinate prediction algorithms, sometimes called “experts.” These simple algorithms …
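One way to read the specialists ("sleeping experts") setting is sketched below: only awake specialists predict and are charged their own loss, while sleeping specialists are effectively charged the learner's loss so their relative weight is unchanged. This is a rough variant written for illustration, with an assumed squared loss and learning rate, and is not claimed to be the paper's exact update rule.

```python
import numpy as np

def specialists_step(w, preds, awake, y, eta=1.0):
    """One round of a specialists-style ('sleeping experts') update.

    w:     weights over N specialists (sums to 1).
    preds: predictions in [0, 1]; only entries where awake is True are used.
    awake: boolean mask of specialists that make a prediction this round.
    y:     observed outcome in [0, 1].
    """
    p_awake = w[awake] / w[awake].sum()
    y_hat = p_awake @ preds[awake]              # learner's combined prediction
    loss = np.zeros_like(w)
    loss[awake] = (preds[awake] - y) ** 2       # awake: their own squared loss
    loss[~awake] = (y_hat - y) ** 2             # sleeping: charged learner's loss
    w_new = w * np.exp(-eta * loss)
    return y_hat, w_new / w_new.sum()
```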