Synthetic control as online linear regression
J Chen - Econometrica, 2023 - Wiley Online Library
This paper notes a simple connection between synthetic control and online learning.
Specifically, we recognize synthetic control as an instance of Follow‐The‐Leader (FTL) …
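To make the FTL reading of synthetic control concrete: at each period the weights on the donor units are the simplex-constrained least-squares fit to all data observed so far, i.e. the "leader" up to that round. A minimal sketch under assumed data shapes (the solver choice and toy data are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def ftl_synthetic_control(X_past, y_past):
    """Follow-The-Leader step: choose donor weights minimizing cumulative
    squared error over all observed periods, constrained to the probability
    simplex (the synthetic-control constraint). Illustrative sketch only."""
    n_donors = X_past.shape[1]
    w0 = np.full(n_donors, 1.0 / n_donors)
    loss = lambda w: np.sum((X_past @ w - y_past) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * n_donors
    res = minimize(loss, w0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x

# Toy online loop: row t of X holds donor outcomes at period t, y[t] the treated unit.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([0.6, 0.4, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=20)
for t in range(1, 20):
    w_t = ftl_synthetic_control(X[:t], y[:t])  # leader fitted on rounds 0..t-1
    y_hat = X[t] @ w_t                         # FTL prediction for round t
```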
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
J Mourtada - The Annals of Statistics, 2022 - projecteuclid.org
The Annals of Statistics 2022, Vol. 50, No. 4, 2157–2178. https://doi.org/10.1214/22-AOS2181
Distributed online linear regressions
We study online linear regression problems in a distributed setting, where the data is spread
over a network. In each round, each network node proposes a linear predictor, with the …
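For context on the setting sketched in this snippet, one standard decentralized scheme is online gradient descent with gossip averaging: each node mixes its weight vector with its neighbours' via a doubly stochastic matrix, then takes a gradient step on its own loss for the round. The mixing matrix, step size, and squared loss below are illustrative assumptions, not necessarily this paper's algorithm:

```python
import numpy as np

def distributed_online_regression(P, X_stream, y_stream, d, eta=0.1):
    """Decentralized online gradient descent sketch: each node keeps a local
    weight vector, averages it with neighbours via the gossip matrix P
    (doubly stochastic), then takes a gradient step on its own squared loss.
    X_t has shape (n_nodes, d); y_t has shape (n_nodes,)."""
    n_nodes = P.shape[0]
    W = np.zeros((n_nodes, d))                # one weight vector per node
    for X_t, y_t in zip(X_stream, y_stream):
        W = P @ W                             # consensus / gossip step
        preds = np.sum(W * X_t, axis=1)       # each node's linear prediction
        grads = (preds - y_t)[:, None] * X_t  # gradient of 0.5 * (pred - y)^2
        W = W - eta * grads                   # local gradient step
    return W
```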
Multi-agent Online Optimization
This monograph provides an overview of distributed online optimization in multi-agent
systems. Online optimization approaches planning and decision problems from a robust …
Stochastic online linear regression: the forward algorithm to replace ridge
R Ouhamma, OA Maillard… - Advances in Neural …, 2021 - proceedings.neurips.cc
We consider the problem of online linear regression in the stochastic setting. We derive high
probability regret bounds for online $\textit{ridge}$ regression and the $\textit{forward}$ …
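The two predictors compared in this abstract differ in one line: online ridge predicts with the regularized least-squares fit to past rounds only, whereas the forward (Vovk–Azoury–Warmuth) algorithm also folds the current feature vector into the Gram matrix before predicting. A minimal sketch, with the regularization level and data treated as assumptions:

```python
import numpy as np

def ridge_and_forward_predictions(X, y, lam=1.0):
    """Sequential predictions of online ridge vs. the forward algorithm.
    At round t, ridge uses A_t = lam*I + sum_{s<t} x_s x_s^T, while the
    forward algorithm additionally adds x_t x_t^T before inverting."""
    T, d = X.shape
    A = lam * np.eye(d)          # running regularized Gram matrix
    b = np.zeros(d)              # running sum of y_s * x_s
    ridge_preds, forward_preds = [], []
    for t in range(T):
        x_t = X[t]
        ridge_preds.append(x_t @ np.linalg.solve(A, b))
        forward_preds.append(x_t @ np.linalg.solve(A + np.outer(x_t, x_t), b))
        A += np.outer(x_t, x_t)
        b += y[t] * x_t
    return np.array(ridge_preds), np.array(forward_preds)
```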
The Gain from Ordering in Online Learning
We study fixed-design online learning where the learner is allowed to choose the order of
the datapoints in order to minimize their regret (aka self-directed online learning). We focus …
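To illustrate the self-directed setting: the learner sees the whole fixed design up front and chooses which point to predict next. One purely hypothetical ordering heuristic (not the paper's rule; the paper analyzes which orderings actually minimize regret) is to greedily request the remaining point with the largest leverage under the current regularized Gram matrix:

```python
import numpy as np

def self_directed_order(X, lam=1.0):
    """Hypothetical ordering heuristic for fixed-design online regression:
    repeatedly pick the remaining point with the largest leverage score
    x^T A^{-1} x under the current regularized Gram matrix A.
    Purely illustrative; shown only to make the setting concrete."""
    T, d = X.shape
    A = lam * np.eye(d)
    remaining = list(range(T))
    order = []
    while remaining:
        A_inv = np.linalg.inv(A)
        scores = [X[i] @ A_inv @ X[i] for i in remaining]
        i = remaining.pop(int(np.argmax(scores)))
        order.append(i)
        A += np.outer(X[i], X[i])
    return order
```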
Bandit learning with general function classes: Heteroscedastic noise and variance-dependent regret bounds
We consider learning a stochastic bandit model, where the reward function belongs to a
general class of uniformly bounded functions, and the additive noise can be …
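One way to see why heteroscedastic noise matters here: when per-sample noise variances are known or estimated, the natural regression step weights each observation by its inverse variance, so low-noise observations count more. A minimal sketch of that weighted estimator in the linear special case (the paper's algorithm handles general bounded function classes and is not reproduced here):

```python
import numpy as np

def variance_weighted_ridge(X, y, sigma2, lam=1.0):
    """Inverse-variance-weighted ridge regression: each sample is weighted
    by 1/sigma2[i], the reciprocal of its (estimated) noise variance."""
    w = 1.0 / np.asarray(sigma2)
    A = lam * np.eye(X.shape[1]) + (X * w[:, None]).T @ X
    b = (X * w[:, None]).T @ y
    return np.linalg.solve(A, b)
```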
Online instrumental variable regression: Regret analysis and bandit feedback
R Della Vecchia, D Basu - arXiv preprint arXiv:2302.09357, 2023 - arxiv.org
Endogeneity, i.e. the dependence between noise and covariates, is a common phenomenon
in real data due to omitted variables, strategic behaviours, measurement errors etc. In …
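For intuition about the estimator underlying this setting: with endogenous covariates, ordinary least squares is biased, and the classical instrumental-variable (two-stage least squares) estimator replaces the Gram matrix X^T X with cross-moments through the instruments. A minimal online sketch of the just-identified case with running sufficient statistics (an illustration of the classical estimator, not this paper's regret-optimal algorithm):

```python
import numpy as np

class OnlineIV:
    """Running two-stage least squares in the just-identified case
    (one instrument per endogenous covariate): keeps Z^T X and Z^T y
    and solves beta = (Z^T X)^{-1} Z^T y on demand."""
    def __init__(self, d):
        self.ZtX = np.zeros((d, d))
        self.Zty = np.zeros(d)

    def update(self, z_t, x_t, y_t):
        self.ZtX += np.outer(z_t, x_t)
        self.Zty += y_t * z_t

    def estimate(self):
        return np.linalg.solve(self.ZtX, self.Zty)
```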
Quasi-Newton steps for efficient online exp-concave optimization
Z Mhammedi, K Gatmiry - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
The aim of this paper is to design computationally-efficient and optimal algorithms for the
online and stochastic exp-concave optimization settings. Typical algorithms for these …
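For reference, the classical baseline in this setting is Online Newton Step: it accumulates outer products of gradients into a Hessian-like matrix and takes preconditioned steps, which is exactly the per-round cost that motivates cheaper quasi-Newton alternatives. A minimal sketch (the projection onto the feasible set and the step-size tuning are omitted as simplifications):

```python
import numpy as np

def online_newton_step(grad_fn, x0, T, gamma=1.0, eps=1.0):
    """Online Newton Step sketch for exp-concave online optimization:
    accumulate A_t = eps*I + sum_s g_s g_s^T and move against the
    A_t-preconditioned gradient. Projection is omitted for brevity."""
    x = np.array(x0, dtype=float)
    A = eps * np.eye(len(x))
    for t in range(T):
        g = grad_fn(x, t)               # gradient of the round-t loss at x
        A += np.outer(g, g)
        x -= (1.0 / gamma) * np.linalg.solve(A, g)
    return x

# Example usage on squared-loss rounds with random data.
rng = np.random.default_rng(0)
Xd, yd = rng.normal(size=(50, 3)), rng.normal(size=50)
w = online_newton_step(lambda w, t: (Xd[t] @ w - yd[t]) * Xd[t], np.zeros(3), T=50)
```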
Refined risk bounds for unbounded losses via transductive priors
We revisit the sequential variants of linear regression with the squared loss, classification
problems with hinge loss, and logistic regression, all characterized by unbounded losses in …