A tutorial on the non-asymptotic theory of system identification

I Ziemann, A Tsiamis, B Lee, Y Jedra… - 2023 62nd IEEE …, 2023 - ieeexplore.ieee.org
This tutorial serves as an introduction to recently developed non-asymptotic methods in the
theory of (mainly linear) system identification. We emphasize tools we deem particularly …

Sharp rates in dependent learning theory: Avoiding sample size deflation for the square loss

I Ziemann, S Tu, GJ Pappas, N Matni - arXiv preprint arXiv:2402.05928, 2024 - arxiv.org
In this work, we study statistical learning with dependent ($\beta$-mixing) data and square
loss in a hypothesis class $\mathscr{F} \subset L_{\Psi_p}$ where $\Psi_p$ is the norm …

Guarantees for nonlinear representation learning: non-identical covariates, dependent data, fewer samples

TT Zhang, BD Lee, I Ziemann, GJ Pappas… - arXiv preprint arXiv …, 2024 - arxiv.org
A driving force behind the diverse applicability of modern machine learning is the ability to
extract meaningful features across many sources. However, many practical domains involve …

Asymptotics of Linear Regression with Linearly Dependent Data

B Moniri, H Hassani - arXiv preprint arXiv:2412.03702, 2024 - arxiv.org
In this paper we study the asymptotics of linear regression in settings where the covariates
exhibit a linear dependency structure, departing from the standard assumption of …

HDI Lab research projects 2023/24

A Goldman, E Frolov, E Kosov, E Lagutin, I Levin… - cs.hse.ru
$dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$, (1) where $U$ is some smooth function and $(W_t)_{t \ge 0}$ is a Wiener process. Under some
regularity condition, the unique invariant distribution of (1) is given by $\pi(x) \propto e^{-U(x)}$ …
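The invariant distribution $\pi(x) \propto e^{-U(x)}$ is what Langevin-type sampling schemes target. A minimal sketch of the Euler–Maruyama discretization of this SDE (the unadjusted Langevin algorithm), using an assumed quadratic potential $U(x) = x^2/2$ as the example, so that $\pi$ is the standard Gaussian:

```python
import numpy as np

def ula_samples(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    dX_t = -grad U(X_t) dt + sqrt(2) dW_t, targeting pi(x) ~ exp(-U(x))."""
    x = x0
    out = np.empty(n_steps)
    for i in range(n_steps):
        # One Euler-Maruyama step: drift -grad U plus Gaussian noise of
        # variance 2*step, matching the sqrt(2) diffusion coefficient.
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        out[i] = x
    return out

# Example potential U(x) = x^2 / 2, so grad U(x) = x and pi is N(0, 1).
rng = np.random.default_rng(0)
samples = ula_samples(lambda x: x, x0=0.0, step=0.01, n_steps=200_000, rng=rng)
```

With a small step size the empirical mean and variance of `samples` approach 0 and 1, the moments of the target Gaussian; the discretization introduces an $O(\text{step})$ bias in the stationary distribution.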