Second opinion needed: communicating uncertainty in medical machine learning
There is great excitement that medical artificial intelligence (AI) based on machine learning
(ML) can be used to improve decision making at the patient level in a variety of healthcare …
Uncertainty quantification in machine learning for engineering design and health prognostics: A tutorial
On top of machine learning (ML) models, uncertainty quantification (UQ) functions as an
essential layer of safety assurance that could lead to more principled decision making by …
ResMLP: Feedforward networks for image classification with data-efficient training
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image
classification. It is a simple residual network that alternates (i) a linear layer in which image …
Dataset distillation with infinitely wide convolutional networks
The effectiveness of machine learning algorithms arises from being able to extract useful
features from large amounts of data. As model and dataset sizes increase, dataset …
The computational limits of deep learning
Deep learning's recent history has been one of achievement: from triumphing over humans
in the game of Go to world-leading performance in image classification, voice recognition …
Similarity of neural network representations revisited
Recent work has sought to understand the behavior of neural networks by comparing
representations between layers and between different trained models. We examine methods …
On exact computation with an infinitely wide neural net
How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard
dataset such as CIFAR-10 when its “width”—namely, number of channels in convolutional …
The generalization error of random features regression: Precise asymptotics and the double descent curve
S Mei, A Montanari - Communications on Pure and Applied …, 2022 - Wiley Online Library
Deep learning methods operate in regimes that defy the traditional statistical mindset.
Neural network architectures often contain more parameters than training samples, and are …
Wide neural networks of any depth evolve as linear models under gradient descent
A longstanding goal in deep learning research has been to precisely characterize training
and generalization. However, the often complex loss landscapes of neural networks have …
Efficient dataset distillation using random feature approximation
Dataset distillation compresses large datasets into smaller synthetic coresets which retain
performance with the aim of reducing the storage and computational burden of processing …