Nicole Mücke
Technical University Brunswick
Verified email at tu-braunschweig.de - Homepage
Title · Cited by · Year
Optimal rates for regularization of statistical inverse learning problems
G Blanchard, N Mücke
Foundations of Computational Mathematics 18 (4), 971-1013, 2018
138 · 2018
Parallelizing spectrally regularized kernel algorithms
N Mücke, G Blanchard
Journal of Machine Learning Research 19 (30), 1-29, 2018
55 · 2018
Beating SGD saturation with tail-averaging and minibatching
N Mücke, G Neu, L Rosasco
Advances in Neural Information Processing Systems 32, 2019
45 · 2019
Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem
M Mollenhauer, N Mücke, TJ Sullivan
arXiv preprint arXiv:2211.08875, 2022
28 · 2022
Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces
E De Vito, N Mücke, L Rosasco
Analysis and Applications 19 (03), 363-396, 2021
27 · 2021
Parallelizing spectral algorithms for kernel learning
G Blanchard, N Mücke
arXiv preprint arXiv:1610.07487, 2016
19 · 2016
Optimal rates for regularization of statistical inverse learning problems
G Blanchard, N Mücke
arXiv preprint arXiv:1604.04054, 2016
17 · 2016
Data-splitting improves statistical performance in overparameterized regimes
N Mücke, E Reiss, J Rungenhagen, M Klein
International Conference on Artificial Intelligence and Statistics, 10322-10350, 2022
15 · 2022
Reducing training time by efficient localized kernel regression
N Mücke
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
14 · 2019
Global minima of DNNs: The plenty pantry
N Mücke, I Steinwart
arXiv preprint arXiv:1905.10686, 169, 2019
12 · 2019
Lepskii principle in supervised learning
G Blanchard, P Mathé, N Mücke
arXiv preprint arXiv:1905.10764, 2019
11 · 2019
Stochastic gradient descent meets distribution regression
N Mücke
International Conference on Artificial Intelligence and Statistics, 2143-2151, 2021
8 · 2021
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
G Blanchard, N Mücke
Analysis and Applications 18 (04), 683-696, 2020
8 · 2020
From inexact optimization to learning via gradient concentration
B Stankewitz, N Mücke, L Rosasco
Computational Optimization and Applications 84 (1), 265-294, 2023
7 · 2023
Stochastic gradient descent in Hilbert scales: Smoothness, preconditioning and earlier stopping
N Mücke, E Reiss
arXiv preprint arXiv:2006.10840, 2020
7 · 2020
Adaptivity for regularized kernel methods by Lepskii's principle
N Mücke
arXiv preprint arXiv:1804.05433, 2018
3 · 2018
Empirical Risk Minimization in the Interpolating Regime with Application to Neural Network Learning
N Mücke, I Steinwart
arXiv preprint arXiv:1905.10686, 2019
2 · 2019
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
G Blanchard, N Mücke
arXiv preprint arXiv:1611.03979, 2016
2 · 2016
How many neurons do we need? A refined analysis for shallow networks trained with gradient descent
M Nguyen, N Mücke
Journal of Statistical Planning and Inference 233, 106169, 2024
1 · 2024
Statistical inverse learning problems with random observations
T Helin, N Mücke
arXiv preprint arXiv:2312.15341, 2023
1 · 2023
Articles 1–20