Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons

AF Psaros, X Meng, Z Zou, L Guo… - Journal of Computational …, 2023 - Elsevier
Neural networks (NNs) are profoundly changing the computational paradigm for how
data are combined with mathematical laws in physics and engineering …

Epistemic neural networks

I Osband, Z Wen, SM Asghari… - Advances in …, 2023 - proceedings.neurips.cc
Intelligence relies on an agent's knowledge of what it does not know. This capability can be
assessed based on the quality of joint predictions of labels across multiple inputs. In …
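
The idea that joint predictions reveal knowledge that marginals hide can be sketched with a toy example (hypothetical two-hypothesis setup, not the paper's construction): an agent holding a uniform mixture over two deterministic labelers has the same marginals as a coin-flip predictor, but a very different joint distribution over two inputs.

```python
# Two hypotheses about a binary labeling of inputs x1, x2: h0 labels both 0,
# h1 labels both 1. The agent holds a uniform mixture over {h0, h1}.
# Marginally each input gets P(y=1)=0.5, identical to a coin-flip predictor,
# but the JOINT prediction over (y1, y2) exposes the epistemic uncertainty.

def joint_mixture(y1, y2):
    # P(y1, y2) under the uniform mixture: mass only where labels agree
    p_h0 = 1.0 if (y1, y2) == (0, 0) else 0.0
    p_h1 = 1.0 if (y1, y2) == (1, 1) else 0.0
    return 0.5 * p_h0 + 0.5 * p_h1

def joint_factorized(y1, y2):
    # Product of marginals P(y1)P(y2) = 0.5 * 0.5: correlation is lost
    return 0.25

print(joint_mixture(0, 0))     # 0.5: under the mixture, labels must agree
print(joint_factorized(0, 1))  # 0.25: the factorized model cannot tell
```

Evaluating log-loss on such joint predictions, rather than per-input marginals, is what separates agents that track what they do not know from those that only match marginal frequencies.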

Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI

T Papamarkou, M Skoularidou, K Palla… - … on Machine Learning, 2024 - openreview.net
In the current landscape of deep learning research, there is a predominant emphasis on
achieving high predictive accuracy in supervised tasks involving large image and language …

An analysis of ensemble sampling

C Qin, Z Wen, X Lu, B Van Roy - Advances in Neural …, 2022 - proceedings.neurips.cc
Ensemble sampling serves as a practical approximation to Thompson sampling when
maintaining an exact posterior distribution over model parameters is computationally …
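
A minimal sketch of the ensemble-sampling idea, on a hypothetical 3-armed Bernoulli bandit (arm means, ensemble size, and perturbation scale are all illustrative, not from the paper): instead of sampling from an exact posterior, the agent keeps an ensemble of perturbed models, picks one uniformly at random each round, and acts greedily under it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-armed Bernoulli bandit; true means are unknown to the agent.
true_means = np.array([0.2, 0.5, 0.8])
n_arms, n_models, horizon = 3, 10, 500

# Each ensemble member keeps its own estimate of the arm means,
# initialized from an independent random prior draw.
estimates = rng.uniform(0.0, 1.0, size=(n_models, n_arms))
counts = np.ones((n_models, n_arms))  # one pseudo-observation per arm

for t in range(horizon):
    m = rng.integers(n_models)          # sample one model uniformly ...
    arm = int(np.argmax(estimates[m]))  # ... and act greedily under it
    reward = float(rng.random() < true_means[arm])
    # Update every member with its own independently perturbed reward;
    # the random perturbations keep the ensemble spread out, so its
    # dispersion stands in for posterior uncertainty.
    for k in range(n_models):
        perturbed = reward + rng.normal(0.0, 0.5)
        counts[k, arm] += 1.0
        estimates[k, arm] += (perturbed - estimates[k, arm]) / counts[k, arm]
```

Sampling a member uniformly and acting greedily mimics Thompson sampling's posterior draw at the cost of maintaining only a finite ensemble, which is the regime the paper's analysis addresses.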

Nonstationary bandit learning via predictive sampling

Y Liu, B Van Roy, K Xu - International Conference on …, 2023 - proceedings.mlr.press
Thompson sampling has proven effective across a wide range of stationary bandit
environments. However, as we demonstrate in this paper, it can perform poorly when …
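
For reference, the stationary baseline the paper departs from can be sketched as standard Thompson sampling on a Bernoulli bandit with conjugate Beta posteriors (a textbook sketch with illustrative arm means, not the paper's predictive-sampling method). Its Beta counts accumulate indefinitely, which is exactly why it can react too slowly when the arm means drift.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.7])      # hypothetical stationary Bernoulli arms
alpha = np.ones(2)                     # Beta(1, 1) prior per arm
beta = np.ones(2)

pulls = []
for t in range(1000):
    theta = rng.beta(alpha, beta)      # one posterior sample per arm
    arm = int(np.argmax(theta))        # act greedily on the sampled means
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward               # conjugate Beta-Bernoulli update
    beta[arm] += 1.0 - reward
    pulls.append(arm)
```

In a stationary environment this concentrates on the better arm; in a nonstationary one the ever-growing counts make the posterior overconfident about stale data, the failure mode the paper targets.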

The neural testbed: Evaluating joint predictions

I Osband, Z Wen, SM Asghari… - Advances in …, 2022 - proceedings.neurips.cc
Predictive distributions quantify uncertainties ignored by point estimates. This paper
introduces The Neural Testbed: an open source benchmark for controlled and principled …

Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs

DD Johnson, D Tarlow, D Duvenaud… - arXiv preprint arXiv …, 2024 - arxiv.org
Identifying how much a model $\widehat{p}_\theta(Y \mid X)$ knows about the stochastic
real-world process $p(Y \mid X)$ it was trained on is important to ensure it avoids producing …

Promises and pitfalls of the linearized Laplace in Bayesian optimization

A Kristiadi, A Immer, R Eschenhagen… - arXiv preprint arXiv …, 2023 - arxiv.org
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in
constructing Bayesian neural networks. It is theoretically compelling since it can be seen as …

To Believe or Not to Believe Your LLM

YA Yadkori, I Kuzborskij, A György… - arXiv preprint arXiv …, 2024 - arxiv.org
We explore uncertainty quantification in large language models (LLMs), with the goal of
identifying when the uncertainty in responses to a given query is large. We simultaneously consider …
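
One generic baseline signal for "uncertainty in responses given a query" (a common heuristic, not the information-theoretic method this paper develops) is the entropy of the empirical distribution over repeatedly sampled answers:

```python
from collections import Counter
import math

def response_entropy(samples):
    # Shannon entropy (in nats) of the empirical answer distribution.
    # All samples agreeing -> 0; many distinct answers -> high uncertainty.
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

print(response_entropy(["Paris"] * 5))        # 0.0: fully consistent answers
print(response_entropy(["Paris", "Lyon"]))    # log(2) ~ 0.693: split answers
```

Such answer-level entropy conflates epistemic and aleatoric components; disentangling the two is precisely what the paper's simultaneous treatment addresses.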