A review of uncertainty quantification in deep learning: Techniques, applications and challenges

M Abdar, F Pourpanah, S Hussain, D Rezazadegan… - Information fusion, 2021 - Elsevier
Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of
uncertainties during both optimization and decision making processes. They have been …

Priors in Bayesian deep learning: A review

V Fortuin - International Statistical Review, 2022 - Wiley Online Library
While the choice of prior is one of the most critical parts of the Bayesian inference workflow,
recent Bayesian deep learning models have often fallen back on vague priors, such as …

Finite versus infinite neural networks: an empirical study

J Lee, S Schoenholz, J Pennington… - Advances in …, 2020 - proceedings.neurips.cc
We perform a careful, thorough, and large-scale empirical study of the correspondence
between wide neural networks and kernel methods. By doing so, we resolve a variety of …

Bayesian neural network priors revisited

V Fortuin, A Garriga-Alonso, SW Ober, F Wenzel… - arXiv preprint arXiv …, 2021 - arxiv.org
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network
inference. However, it is unclear whether these priors accurately reflect our true beliefs …

Learning layer-wise equivariances automatically using gradients

T van der Ouderaa, A Immer… - Advances in Neural …, 2024 - proceedings.neurips.cc
Convolutions encode equivariance symmetries into neural networks, leading to better
generalisation performance. However, symmetries provide fixed hard constraints on the …

Invariance learning in deep neural networks with differentiable Laplace approximations

A Immer, T van der Ouderaa… - Advances in …, 2022 - proceedings.neurips.cc
Data augmentation is commonly applied to improve the performance of deep learning by
enforcing the knowledge that certain transformations on the input preserve the output …

Adapting the linearised Laplace model evidence for modern deep learning

J Antorán, D Janz, JU Allingham… - International …, 2022 - proceedings.mlr.press
The linearised Laplace method for estimating model uncertainty has received renewed
attention in the Bayesian deep learning community. The method provides reliable error bars …

Stochastic marginal likelihood gradients using neural tangent kernels

A Immer, TFA Van Der Ouderaa… - International …, 2023 - proceedings.mlr.press
Selecting hyperparameters in deep learning greatly impacts its effectiveness but requires
manual effort and expertise. Recent works show that Bayesian model selection with Laplace …

Bayesian low-rank adaptation for large language models

AX Yang, M Robeyns, X Wang, L Aitchison - arXiv preprint arXiv …, 2023 - arxiv.org
Parameter-efficient fine-tuning (PEFT) has emerged as a new paradigm for cost-efficient fine-
tuning of large language models (LLMs), with low-rank adaptation (LoRA) being a widely …

Function-space regularization in neural networks: A probabilistic perspective

TGJ Rudner, S Kapoor, S Qiu… - … on Machine Learning, 2023 - proceedings.mlr.press
Parameter-space regularization in neural network optimization is a fundamental tool for
improving generalization. However, standard parameter-space regularization methods …