User-friendly introduction to PAC-Bayes bounds

P Alquier - Foundations and Trends® in Machine Learning, 2024 - nowpublishers.com
Aggregated predictors are obtained by making a set of basic predictors vote according to
some weights, that is, to some probability distribution. Randomized predictors are obtained …
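The distinction is easy to make concrete. Below is a minimal sketch with hypothetical base predictors and weights chosen purely for illustration: the aggregated predictor takes a weighted vote, while the randomized predictor samples a single base predictor from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative base predictors and voting weights (a probability distribution).
predictors = [lambda x: np.sign(x),
              lambda x: np.sign(x - 0.5),
              lambda x: -np.sign(x)]
rho = np.array([0.5, 0.3, 0.2])  # weights sum to 1

def aggregated_predict(x):
    """Aggregated predictor: weighted vote of the base predictors."""
    votes = np.array([h(x) for h in predictors])
    return np.sign(rho @ votes)

def randomized_predict(x):
    """Randomized predictor: draw one base predictor from rho, then predict."""
    i = rng.choice(len(predictors), p=rho)
    return predictors[i](x)

print(aggregated_predict(0.2), randomized_predict(0.2))
```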

U-Sleep: resilient high-frequency sleep staging

M Perslev, S Darkner, L Kempfner, M Nikolic… - NPJ digital …, 2021 - nature.com
Sleep disorders affect a large portion of the global population and are strong predictors of
morbidity and all-cause mortality. Sleep staging segments a period of sleep into a sequence …

Tighter risk certificates for neural networks

M Pérez-Ortiz, O Rivasplata, J Shawe-Taylor… - Journal of Machine …, 2021 - jmlr.org
This paper presents an empirical study regarding training probabilistic neural networks
using training objectives derived from PAC-Bayes bounds. In the context of probabilistic …
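As a rough illustration of the idea, and not the paper's exact objective or architecture, one can train a Gaussian posterior over the weights of a linear classifier by minimizing a McAllester-style bound directly: empirical surrogate loss plus a KL complexity term. All names and hyperparameters below are illustrative assumptions.

```python
import math
import torch

n, d, delta = 1000, 20, 0.05
X = torch.randn(n, d)
y = torch.sign(X[:, 0] + 0.1 * torch.randn(n))   # synthetic labels

mu = torch.zeros(d, requires_grad=True)          # posterior mean
log_sigma = torch.zeros(d, requires_grad=True)   # posterior log-std
prior = torch.distributions.Normal(torch.zeros(d), torch.ones(d))
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(200):
    posterior = torch.distributions.Normal(mu, log_sigma.exp())
    w = posterior.rsample()                        # reparameterized weight sample
    emp_loss = torch.sigmoid(-y * (X @ w)).mean()  # surrogate for the 0-1 loss
    kl = torch.distributions.kl.kl_divergence(posterior, prior).sum()
    # Train on the bound itself: empirical risk + complexity term.
    bound = emp_loss + torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    opt.zero_grad()
    bound.backward()
    opt.step()
```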

Learning under model misspecification: Applications to variational and ensemble methods

A Masegosa - Advances in Neural Information Processing …, 2020 - proceedings.neurips.cc
Virtually any model we use in machine learning to make predictions does not perfectly
represent reality. So, most of the learning happens under model misspecification. In this …

PAC-Bayes analysis beyond the usual bounds

O Rivasplata, I Kuzborskij… - Advances in …, 2020 - proceedings.neurips.cc
We focus on a stochastic learning model where the learner observes a finite set of training
examples and the output of the learning process is a data-dependent distribution over a …
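For context, the "usual bound" the title alludes to is typically the PAC-Bayes-kl inequality of Seeger and Maurer: for a prior $P$ fixed before seeing the data, with probability at least $1 - \delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$,

$$\mathrm{kl}\big(\hat{L}(Q) \,\big\|\, L(Q)\big) \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n},$$

where $\hat{L}(Q)$ and $L(Q)$ are the empirical and population risks of the randomized predictor and $\mathrm{kl}$ denotes the KL divergence between Bernoulli distributions. The paper relaxes standard assumptions behind bounds of this form, for instance allowing data-dependent priors.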

MMD-FUSE: Learning and combining kernels for two-sample testing without data splitting

F Biggs, A Schrab, A Gretton - Advances in Neural …, 2023 - proceedings.neurips.cc
We propose novel statistics which maximise the power of a two-sample test based on the
Maximum Mean Discrepancy (MMD), by adapting over the set of kernels used in defining it …
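The base quantity being adapted is the kernel MMD. A minimal sketch of the standard unbiased estimator for a single fixed Gaussian kernel follows; the kernel family and bandwidth are illustrative, and MMD-FUSE's actual contribution, adapting and combining kernels without data splitting, is not shown.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of the squared MMD between samples x and y."""
    m, n = len(x), len(y)
    kxx, kyy = gaussian_kernel(x, x, bandwidth), gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
            - 2 * kxy.mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2_unbiased(x, y))  # grows as the two distributions separate
```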

Diversity and generalization in neural network ensembles

LA Ortega, R Cabañas… - … Conference on Artificial …, 2022 - proceedings.mlr.press
Ensembles are widely used in machine learning and, usually, provide state-of-the-art
performance in many prediction tasks. From the very beginning, the diversity of an ensemble …
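A classical starting point for relating diversity to ensemble accuracy is the ambiguity decomposition of Krogh and Vedelsby for the squared loss, a simpler identity than the paper's own analysis: for weights $w_i \ge 0$ with $\sum_i w_i = 1$ and ensemble prediction $\bar{f} = \sum_i w_i f_i$,

$$\big(\bar{f}(x) - y\big)^2 = \sum_i w_i \big(f_i(x) - y\big)^2 - \sum_i w_i \big(f_i(x) - \bar{f}(x)\big)^2,$$

so at a fixed average member error, greater diversity (the second term) strictly lowers the ensemble error.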

Non-vacuous generalisation bounds for shallow neural networks

F Biggs, B Guedj - International Conference on Machine …, 2022 - proceedings.mlr.press
We focus on a specific class of shallow neural networks with a single hidden layer, namely
those with $L_2$-normalised data and either a sigmoid-shaped Gaussian error function …
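The function class itself is simple to write down. Here is a minimal sketch with illustrative dimensions and random weights; the paper's bounds of course concern trained instances of such networks.

```python
import numpy as np
from scipy.special import erf

d, k = 10, 32                    # input dimension, hidden width (illustrative)
rng = np.random.default_rng(0)
W = rng.normal(size=(k, d))      # hidden-layer weights
v = rng.normal(size=k)           # output weights

def predict(x):
    x = x / np.linalg.norm(x)    # L2-normalise the input, as in the paper
    return v @ erf(W @ x)        # single hidden layer with erf activation

print(predict(rng.normal(size=d)))
```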

Joint training of deep ensembles fails due to learner collusion

A Jeffares, T Liu, J Crabbé… - Advances in Neural …, 2023 - proceedings.neurips.cc
Ensembles of machine learning models have been well established as a powerful method of
improving performance over a single model. Traditionally, ensembling algorithms train their …
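The distinction at issue can be sketched in standard notation. Independent training minimizes each member's own loss, while joint end-to-end training minimizes the loss of the combined prediction of $M$ members:

$$\mathcal{L}_{\mathrm{indep}} = \frac{1}{M}\sum_{i=1}^{M} \ell\big(f_i(x), y\big), \qquad \mathcal{L}_{\mathrm{joint}} = \ell\Big(\frac{1}{M}\sum_{i=1}^{M} f_i(x), y\Big).$$

Under the joint objective only the averaged output is penalized, so members can co-adapt to cancel one another's errors on the training set without being individually accurate, which is the "collusion" of the title.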

How tight can PAC-Bayes be in the small data regime?

A Foong, W Bruinsma, D Burt… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this paper, we investigate the question: _Given a small number of datapoints, for example
$N = 30$, how tight can PAC-Bayes and test set bounds be made?_ For such small …
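The test-set half of the question is cheap to make concrete. A minimal sketch via binomial tail inversion (the Clopper-Pearson upper confidence bound); the error count below is made up for illustration.

```python
from scipy.stats import beta

def test_set_bound(errors, n, delta=0.05):
    """Upper confidence bound on the true error rate, given `errors`
    mistakes on a held-out test set of size n (binomial tail inversion)."""
    if errors == n:
        return 1.0
    return beta.ppf(1 - delta, errors + 1, n - errors)

# With N = 30 held-out points and 3 observed errors, the 95% bound is
# roughly 0.24, far above the empirical error rate of 0.10.
print(test_set_bound(3, 30))
```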