Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons
Neural networks (NNs) are currently changing the computational paradigm on how to
combine data with mathematical laws in physics and engineering in a profound way …
Epistemic neural networks
Intelligence relies on an agent's knowledge of what it does not know. This capability can be
assessed based on the quality of joint predictions of labels across multiple inputs. In …
Position paper: Bayesian deep learning in the age of large-scale AI
In the current landscape of deep learning research, there is a predominant emphasis on
achieving high predictive accuracy in supervised tasks involving large image and language …
An analysis of ensemble sampling
Ensemble sampling serves as a practical approximation to Thompson sampling when
maintaining an exact posterior distribution over model parameters is computationally …
Nonstationary bandit learning via predictive sampling
Thompson sampling has proven effective across a wide range of stationary bandit
environments. However, as we demonstrate in this paper, it can perform poorly when …
The neural testbed: Evaluating joint predictions
Predictive distributions quantify uncertainties ignored by point estimates. This paper
introduces The Neural Testbed: an open source benchmark for controlled and principled …
Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs
Identifying how much a model $\widehat{p}_\theta(Y \mid X)$ knows about the stochastic
real-world process $p(Y \mid X)$ it was trained on is important to ensure it avoids producing …
Promises and pitfalls of the linearized Laplace in Bayesian optimization
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in
constructing Bayesian neural networks. It is theoretically compelling since it can be seen as …
To Believe or Not to Believe Your LLM
We explore uncertainty quantification in large language models (LLMs), with the goal to
identify when uncertainty in responses given a query is large. We simultaneously consider …