| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Optimal rates for regularization of statistical inverse learning problems | G Blanchard, N Mücke | Foundations of Computational Mathematics 18 (4), 971-1013 | 138 | 2018 |
| Parallelizing spectrally regularized kernel algorithms | N Mücke, G Blanchard | Journal of Machine Learning Research 19 (30), 1-29 | 55 | 2018 |
| Beating SGD saturation with tail-averaging and minibatching | N Mücke, G Neu, L Rosasco | Advances in Neural Information Processing Systems 32 | 45 | 2019 |
| Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem | M Mollenhauer, N Mücke, TJ Sullivan | arXiv preprint arXiv:2211.08875 | 28 | 2022 |
| Reproducing kernel Hilbert spaces on manifolds: Sobolev and diffusion spaces | E De Vito, N Mücke, L Rosasco | Analysis and Applications 19 (03), 363-396 | 27 | 2021 |
| Parallelizing spectral algorithms for kernel learning | G Blanchard, N Mücke | arXiv preprint arXiv:1610.07487 | 19 | 2016 |
| Optimal rates for regularization of statistical inverse learning problems | G Blanchard, N Mücke | arXiv preprint arXiv:1604.04054 | 17 | 2016 |
| Data-splitting improves statistical performance in overparameterized regimes | N Mücke, E Reiss, J Rungenhagen, M Klein | International Conference on Artificial Intelligence and Statistics, 10322-10350 | 15 | 2022 |
| Reducing training time by efficient localized kernel regression | N Mücke | The 22nd International Conference on Artificial Intelligence and Statistics … | 14 | 2019 |
| Global minima of DNNs: The plenty pantry | N Mücke, I Steinwart | arXiv preprint arXiv:1905.10686 | 12 | 2019 |
| Lepskii principle in supervised learning | G Blanchard, P Mathé, N Mücke | arXiv preprint arXiv:1905.10764 | 11 | 2019 |
| Stochastic gradient descent meets distribution regression | N Mücke | International Conference on Artificial Intelligence and Statistics, 2143-2151 | 8 | 2021 |
| Kernel regression, minimax rates and effective dimensionality: Beyond the regular case | G Blanchard, N Mücke | Analysis and Applications 18 (04), 683-696 | 8 | 2020 |
| From inexact optimization to learning via gradient concentration | B Stankewitz, N Mücke, L Rosasco | Computational Optimization and Applications 84 (1), 265-294 | 7 | 2023 |
| Stochastic gradient descent in Hilbert scales: Smoothness, preconditioning and earlier stopping | N Mücke, E Reiss | arXiv preprint arXiv:2006.10840 | 7 | 2020 |
| Adaptivity for regularized kernel methods by Lepskii's principle | N Mücke | arXiv preprint arXiv:1804.05433 | 3 | 2018 |
| Empirical Risk Minimization in the Interpolating Regime with Application to Neural Network Learning | N Mücke, I Steinwart | arXiv preprint arXiv:1905.10686 | 2 | 2019 |
| Kernel regression, minimax rates and effective dimensionality: Beyond the regular case | G Blanchard, N Mücke | arXiv preprint arXiv:1611.03979 | 2 | 2016 |
| How many neurons do we need? A refined analysis for shallow networks trained with gradient descent | M Nguyen, N Mücke | Journal of Statistical Planning and Inference 233, 106169 | 1 | 2024 |
| Statistical inverse learning problems with random observations | T Helin, N Mücke | arXiv preprint arXiv:2312.15341 | 1 | 2023 |