A tutorial on the non-asymptotic theory of system identification
This tutorial serves as an introduction to recently developed non-asymptotic methods in the
theory of (mainly linear) system identification. We emphasize tools we deem particularly …
Sharp rates in dependent learning theory: Avoiding sample size deflation for the square loss
In this work, we study statistical learning with dependent ($\beta$-mixing) data and square
loss in a hypothesis class $\mathscr{F} \subset L_{\Psi_p}$ where $\Psi_p$ is the norm …
Guarantees for nonlinear representation learning: non-identical covariates, dependent data, fewer samples
A driving force behind the diverse applicability of modern machine learning is the ability to
extract meaningful features across many sources. However, many practical domains involve …
Asymptotics of Linear Regression with Linearly Dependent Data
In this paper we study the asymptotics of linear regression in settings where the covariates
exhibit a linear dependency structure, departing from the standard assumption of …
[PDF] HDI Lab research projects 2023/24
A Goldman, E Frolov, E Kosov, E Lagutin, I Levin… - cs.hse.ru
… $\sqrt{2}\,dW_t$, (1) where $U$ is some smooth function and $(W_t)_{t \ge 0}$ is a Wiener process. Under some
regularity condition, the unique invariant distribution of (1) is given by $\pi(x) \propto e^{-U(x)}$ …
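The snippet truncates the left-hand side of equation (1). Given the noise term $\sqrt{2}\,dW_t$ and the stated invariant law $\pi(x) \propto e^{-U(x)}$, it is presumably the overdamped Langevin diffusion; a plausible reconstruction, in which the drift $-\nabla U(X_t)$ is an assumption not visible in the snippet, is
$$ dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t, \qquad t \ge 0. $$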