User-friendly introduction to PAC-Bayes bounds
P Alquier - Foundations and Trends® in Machine Learning, 2024 - nowpublishers.com
Aggregated predictors are obtained by making a set of basic predictors vote according to
some weights, that is, to some probability distribution. Randomized predictors are obtained …
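The two notions in this snippet have a compact formalisation. As an illustration only (a standard McAllester-type statement, not necessarily the exact form given in the monograph): with a prior \pi and a posterior \rho over basic predictors, the aggregated predictor averages the votes under \rho, a randomized predictor draws a fresh h \sim \rho at prediction time, and a classical PAC-Bayes bound controls the population risk R via the empirical risk r_n:

\[
  f_\rho(x) = \mathbb{E}_{h\sim\rho}[h(x)], \qquad
  \mathbb{E}_{h\sim\rho}[R(h)] \le \mathbb{E}_{h\sim\rho}[r_n(h)]
  + \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln(2\sqrt{n}/\delta)}{2n}},
\]

holding with probability at least 1 - \delta over the n training samples, simultaneously for all posteriors \rho.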
Improving self-supervised learning by characterizing idealized representations
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …
Robustness verification for contrastive learning
Contrastive adversarial training has successfully improved the robustness of contrastive
learning (CL). However, the robustness metric used in these methods is linked to attack …
Generalization bounds: Perspectives from information theory and PAC-Bayes
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …
Learning via Wasserstein-based high probability generalisation bounds
P Viallard, M Haddouche… - Advances in Neural …, 2024 - proceedings.neurips.cc
Minimising upper bounds on the population risk or the generalisation gap has been widely
used in structural risk minimisation (SRM); in particular, this is at the core of PAC-Bayesian …
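To make the SRM connection concrete, the posterior is chosen by minimising the right-hand side of such a bound. A schematic objective, with \mathrm{cpx} denoting a generic complexity term (notation introduced here for illustration; the paper's actual bounds, assumptions and constants differ):

\[
  \hat{\rho} \in \operatorname*{arg\,min}_{\rho} \Big\{ \mathbb{E}_{h\sim\rho}[r_n(h)] + \mathrm{cpx}(\rho, \pi, n, \delta) \Big\}.
\]

KL-based PAC-Bayes bounds take \mathrm{cpx} \propto \sqrt{\mathrm{KL}(\rho\,\|\,\pi)/n}; Wasserstein-based bounds of the kind studied here instead measure the distance between \rho and \pi with a Wasserstein metric, which, under Lipschitz-type assumptions on the loss, stays finite even for Dirac posteriors, i.e. for deterministic predictors.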
Generalization bounds for adversarial contrastive learning
Deep networks are well known to be fragile to adversarial attacks, and adversarial training is
one of the most popular methods used to train a robust model. To take advantage of …
Understanding negative samples in instance discriminative self-supervised representation learning
Instance discriminative self-supervised representation learning has attracted attention
thanks to its unsupervised nature and informative feature representation for downstream …
On the surrogate gap between contrastive and supervised losses
Contrastive representation learning encourages data representation to make semantically
similar pairs closer than randomly drawn negative samples, which has been successful in …
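The mechanism described in the two preceding entries, pulling semantically similar pairs together while treating the rest of the batch as randomly drawn negatives, is what an InfoNCE-style loss implements. A minimal NumPy sketch (the function name, temperature value and toy data are illustrative, not taken from any of these papers):

    import numpy as np

    def info_nce_loss(anchors, positives, temperature=0.1):
        # Row i of `positives` is the semantically similar view of row i of
        # `anchors`; every other row in the batch acts as a negative sample.
        a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
        p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
        logits = a @ p.T / temperature               # cosine similarities
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))           # diagonal = positive pairs

    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(8, 16))  # representations of one augmented view
    z2 = rng.normal(size=(8, 16))  # representations of the other view
    print(info_nce_loss(z1, z2))

Lowering the temperature sharpens the softmax over the negatives, tightening the contrast between the positive pair and the rest of the batch.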
Towards generalizable graph contrastive learning: An information theory perspective
Graph Contrastive Learning (GCL) is increasingly employed in graph
representation learning with the primary aim of learning node/graph representations from a …
Federated Learning with Nonvacuous Generalisation Bounds
We introduce a novel strategy to train randomised predictors in federated learning, where
each node of the network aims at preserving its privacy by releasing a local predictor but …