Priors in Bayesian deep learning: A review

V Fortuin - International Statistical Review, 2022 - Wiley Online Library
While the choice of prior is one of the most critical parts of the Bayesian inference workflow,
recent Bayesian deep learning models have often fallen back on vague priors, such as …

An optimization-centric view on Bayes' rule: Reviewing and generalizing variational inference

J Knoblauch, J Jewson, T Damoulas - Journal of Machine Learning …, 2022 - jmlr.org
We advocate an optimization-centric view of Bayesian inference. Our inspiration is the
representation of Bayes' rule as infinite-dimensional optimization (Csiszár, 1975; Donsker …
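The infinite-dimensional optimization this snippet alludes to is the classical Gibbs variational principle (a standard result, sketched here for context rather than taken from the paper's text): the Bayes posterior is the unique minimizer, over all distributions on the parameter space, of expected negative log-likelihood plus a KL penalty toward the prior:

```latex
\[
p(\theta \mid x) \;=\; \operatorname*{arg\,min}_{q \in \mathcal{P}(\Theta)}
\Big\{ \mathbb{E}_{q(\theta)}\!\left[-\log p(x \mid \theta)\right]
\;+\; \mathrm{KL}\!\left(q(\theta)\,\|\,p(\theta)\right) \Big\}.
\]
```

Restricting the feasible set $\mathcal{P}(\Theta)$ to a tractable family (e.g. mean-field Gaussians) recovers standard variational inference, which is why this representation supports the paper's generalizations.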

A primer on Bayesian neural networks: review and debates

J Arbel, K Pitas, M Vladimirova, V Fortuin - arXiv preprint arXiv:2309.16314, 2023 - arxiv.org
Neural networks have achieved remarkable performance across various problem domains,
but their widespread applicability is hindered by inherent limitations such as overconfidence …

Sampling weights of deep neural networks

EL Bolager, I Burak, C Datar, Q Sun… - Advances in Neural …, 2023 - proceedings.neurips.cc
We introduce a probability distribution, combined with an efficient sampling algorithm, for
weights and biases of fully-connected neural networks. In a supervised learning context, no …

All you need is a good functional prior for Bayesian deep learning

BH Tran, S Rossi, D Milios, M Filippone - Journal of Machine Learning …, 2022 - jmlr.org
The Bayesian treatment of neural networks dictates that a prior distribution is specified over
their weight and bias parameters. This poses a challenge because modern neural networks …

Coherent Blending of Biophysics-Based Knowledge with Bayesian Neural Networks for Robust Protein Property Prediction

H Nisonoff, Y Wang, J Listgarten - ACS Synthetic Biology, 2023 - ACS Publications
Predicting properties of proteins is of interest for basic biological understanding and protein
engineering alike. Increasingly, machine learning (ML) approaches are being used for this …

Incorporating unlabelled data into Bayesian neural networks

M Sharma, T Rainforth, YW Teh, V Fortuin - arXiv preprint arXiv …, 2023 - arxiv.org
Conventional Bayesian Neural Networks (BNNs) cannot leverage unlabelled data to
improve their predictions. To overcome this limitation, we introduce Self-Supervised …

Quantitative Gaussian approximation of randomly initialized deep neural networks

A Basteri, D Trevisan - Machine Learning, 2024 - Springer
Given any deep fully connected neural network, initialized with random Gaussian
parameters, we bound from above the quadratic Wasserstein distance between its output …

Non-asymptotic approximations of neural networks by Gaussian processes

R Eldan, D Mikulincer… - Conference on Learning …, 2021 - proceedings.mlr.press
We study the extent to which wide neural networks may be approximated by Gaussian
processes, when initialized with random weights. It is a well-established fact that as the …
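The Gaussian behaviour described in the two snippets above can be checked numerically. The sketch below (an illustration of the general phenomenon, not either paper's proof technique) draws many independent one-hidden-layer ReLU networks at random Gaussian initialization and evaluates each at a fixed input; with the usual 1/sqrt(width) output scaling, the empirical output distribution is close to N(0, 0.5):

```python
# Minimal sketch: outputs of wide, randomly initialized ReLU networks at a
# fixed input concentrate around a Gaussian as width grows (NNGP-at-init).
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 2000   # hidden-layer width
n_draws = 3000    # number of independent network initializations

# Input x = 1.0 with first-layer weights ~ N(0, 1), so each preactivation
# is a standard normal draw.
preact = rng.standard_normal((n_draws, n_hidden))
hidden = np.maximum(preact, 0.0)  # ReLU activation

# Second-layer weights ~ N(0, 1/n_hidden): the 1/sqrt(width) scaling that
# keeps the output O(1) and drives the CLT / Gaussian-process limit.
w2 = rng.standard_normal((n_draws, n_hidden)) / np.sqrt(n_hidden)
outputs = (w2 * hidden).sum(axis=1)

emp_mean = outputs.mean()
emp_var = outputs.var()
# For z ~ N(0, 1), E[relu(z)^2] = 1/2, so the limiting output law is N(0, 0.5).
```

The quantitative results in the papers above go further, bounding the distance (e.g. in Wasserstein metric) between the finite-width output law and its Gaussian limit as a function of width and depth.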

Incorporating prior knowledge into neural networks through an implicit composite kernel

Z Jiang, T Zheng, Y Liu, D Carlson - arXiv preprint arXiv:2205.07384, 2022 - arxiv.org
It is challenging to guide neural network (NN) learning with prior knowledge. In contrast,
many known properties, such as spatial smoothness or seasonality, are straightforward to …