A sharp upper bound for sampling numbers in L2
For a class F of complex-valued functions on a set D, we denote by g_n(F) its sampling
numbers, i.e., the minimal worst-case error on F, measured in L2, that can be achieved with a …
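For orientation, sampling numbers are usually defined via the best worst-case recovery from n function values; a standard formulation (notation assumed from the usual definition, not quoted from the paper) is:

```latex
g_n(F) \;=\; \inf_{\substack{x_1,\dots,x_n \in D \\ \varphi\colon \mathbb{C}^n \to L_2}}
\;\sup_{f \in F}\;
\bigl\| f - \varphi\bigl(f(x_1),\dots,f(x_n)\bigr) \bigr\|_{L_2}
```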
A new upper bound for sampling numbers
We provide a new upper bound for sampling numbers (g_n)_{n∈ℕ} associated with the
compact embedding of a separable reproducing kernel Hilbert space into the space of …
Function values are enough for L2-approximation: Part II
In the first part we have shown that, for L2-approximation of functions from a separable
Hilbert space in the worst-case setting, linear algorithms based on function values are …
Function Values Are Enough for L2-Approximation
We study the L2-approximation of functions from a Hilbert space and compare the
sampling numbers with the approximation numbers. The sampling number e_n is the …
Worst-case recovery guarantees for least squares approximation using random samples
We construct a least squares approximation method for the recovery of complex-valued
functions from a reproducing kernel Hilbert space on D ⊂ R^d. The nodes are drawn at …
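The abstract describes recovering a function by least squares from function values at randomly drawn nodes. A minimal sketch of that idea, assuming i.i.d. uniform nodes on [0, 1] and a small illustrative trigonometric basis (the function and basis names here are assumptions, not the paper's construction):

```python
import numpy as np

def least_squares_approx(f, basis, n_nodes, rng=np.random.default_rng(0)):
    """Recover f by least squares from function values at random nodes.

    basis: list of callables phi_k; nodes are drawn i.i.d. uniform on [0, 1].
    Returns coefficients c minimizing sum_i |f(x_i) - sum_k c_k phi_k(x_i)|^2.
    """
    x = rng.uniform(0.0, 1.0, size=n_nodes)           # random sampling nodes
    A = np.column_stack([phi(x) for phi in basis])    # design matrix
    y = f(x)                                          # standard information: f(x_i)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

# usage: f(x) = sin(2*pi*x) lies in the span of the basis, so the
# coefficients are recovered up to numerical precision
basis = [lambda x: np.ones_like(x),
         lambda x: np.cos(2 * np.pi * x),
         lambda x: np.sin(2 * np.pi * x)]
f = lambda x: np.sin(2 * np.pi * x)
c = least_squares_approx(f, basis, n_nodes=50)
```

With more nodes than basis functions, the least squares system is overdetermined and the recovered coefficients are stable against the randomness of the node draw.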
On the power of standard information for tractability for L∞ approximation of periodic functions in the worst case setting
J Geng, H Wang - Journal of Complexity, 2024 - Elsevier
We study multivariate approximation of periodic functions in the worst case setting with the
error measured in the L∞ norm. We consider algorithms that use standard information Λ^std …
Random points are optimal for the approximation of Sobolev functions
D Krieg, M Sonnleitner - IMA Journal of Numerical Analysis, 2024 - academic.oup.com
We show that independent and uniformly distributed sampling points are asymptotically as
good as optimal sampling points for the approximation of functions from Sobolev spaces on …
Random sections of ellipsoids and the power of random information
We study the circumradius of the intersection of an m-dimensional ellipsoid E
with semi-axes σ_1 ≥ ⋯ ≥ σ_m with random subspaces of …
Speeding up Monte Carlo integration: Control neighbors for optimal convergence
A novel linear integration rule called control neighbors is proposed in which
nearest neighbor estimates act as control variates to speed up the convergence rate of the …
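The abstract's idea is that a nearest-neighbor surrogate of the integrand, whose integral can be computed exactly, serves as a control variate for plain Monte Carlo. A simplified one-dimensional sketch of that idea (the function name and the uniform-on-[0, 1] setting are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def control_neighbors_1d(f, n, rng=np.random.default_rng(1)):
    """Monte Carlo integration of f over [0, 1] using a 1-nearest-neighbor
    interpolant of f as a control variate (simplified sketch)."""
    x = np.sort(rng.uniform(0.0, 1.0, size=n))        # nodes for the surrogate
    fx = f(x)
    # The 1-NN interpolant is piecewise constant with breakpoints at the
    # midpoints between consecutive sorted nodes, so its integral is exact.
    mid = (x[:-1] + x[1:]) / 2.0
    edges = np.concatenate(([0.0], mid, [1.0]))
    surrogate_integral = np.sum(fx * np.diff(edges))
    # Fresh Monte Carlo points; subtract the surrogate value at each point,
    # so only the (small) residual f - f_hat is estimated by sampling.
    u = rng.uniform(0.0, 1.0, size=n)
    idx = np.searchsorted(edges, u, side="right") - 1  # nearest node of each u
    residual = f(u) - fx[np.clip(idx, 0, n - 1)]
    return surrogate_integral + residual.mean()

# usage: the true integral of t^2 over [0, 1] is 1/3
est = control_neighbors_1d(lambda t: t * t, n=2000)
```

Because the residual f − f̂ shrinks as the nodes fill in, its Monte Carlo estimate has far smaller variance than a direct estimate of f, which is the source of the speed-up described in the abstract.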
On the power of iid information for linear approximation
M Sonnleitner, M Ullrich - arXiv preprint arXiv:2310.12740, 2023 - arxiv.org
This survey is concerned with the power of random information for approximation in the
(deterministic) worst-case setting, with special emphasis on information that is obtained …