Neural network approximation

R DeVore, B Hanin, G Petrova - Acta Numerica, 2021 - cambridge.org
Neural networks (NNs) are the method of choice for building learning algorithms. They are
now being investigated for other numerical tasks such as solving high-dimensional partial …

[BOOK][B] A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations

Artificial neural networks (ANNs) have very successfully been used in numerical simulations
for a series of computational problems ranging from image classification/image recognition …

Optimal approximation with sparsely connected deep neural networks

H Bölcskei, P Grohs, G Kutyniok, P Petersen - SIAM Journal on Mathematics of …, 2019 - SIAM
We derive fundamental lower bounds on the connectivity and the memory requirements of
deep neural networks guaranteeing uniform approximation rates for arbitrary function …

Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical …

J Berner, P Grohs, A Jentzen - SIAM Journal on Mathematics of Data Science, 2020 - SIAM
The development of new classification and regression algorithms based on empirical risk
minimization (ERM) over deep neural network hypothesis classes, coined deep learning …

Approximation theory of the MLP model in neural networks

A Pinkus - Acta Numerica, 1999 - cambridge.org
In this survey we discuss various approximation-theoretic problems that arise in the
multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of …

Deep neural network approximation theory

D Elbrächter, D Perekrestenko, P Grohs… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
This paper develops fundamental limits of deep neural network learning by characterizing
what is possible if no constraints are imposed on the learning algorithm and on the amount …

[BOOK][B] Ridgelets: theory and applications

EJ Candès - 1998 - search.proquest.com
Single hidden-layer feedforward neural networks have been proposed as an approach to
bypass the curse of dimensionality and are now becoming widely applied to approximation or …

A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with …

A Jentzen, D Salimova, T Welti - arXiv preprint arXiv:1809.07321, 2018 - arxiv.org
In recent years deep artificial neural networks (DNNs) have been successfully employed in
numerical simulations for a multitude of computational problems including, for example …

Approximation spaces of deep neural networks

R Gribonval, G Kutyniok, M Nielsen… - Constructive …, 2022 - Springer
We study the expressivity of deep neural networks. Measuring a network's complexity by its
number of connections or by its number of neurons, we consider the class of functions for …