User-friendly introduction to PAC-Bayes bounds

P Alquier - Foundations and Trends® in Machine Learning, 2024 - nowpublishers.com
Aggregated predictors are obtained by making a set of basic predictors vote according to
some weights, that is, to some probability distribution. Randomized predictors are obtained …
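The aggregation setting described in this snippet can be made concrete with the classical PAC-Bayes bound (McAllester's bound, in Maurer's tightened form), which is the central object surveyed in this tutorial. As a sketch, writing $L(h)$ for the population risk, $\hat{L}_S(h)$ for the empirical risk on a sample $S$ of size $n$, $\pi$ for a data-free prior and $\rho$ for any posterior (the "weights" of the aggregated predictor), the bound reads:

```latex
% With probability at least 1 - \delta over the draw of S, simultaneously
% for all posteriors \rho on the predictor class:
\mathbb{E}_{h \sim \rho}\!\left[ L(h) \right]
\;\le\;
\mathbb{E}_{h \sim \rho}\!\left[ \hat{L}_S(h) \right]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}
```

Minimising the right-hand side over $\rho$ is the structural-risk-minimisation strategy mentioned in several of the entries below; the KL term penalises posteriors that stray far from the prior.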

Improving self-supervised learning by characterizing idealized representations

Y Dubois, S Ermon, TB Hashimoto… - Advances in Neural …, 2022 - proceedings.neurips.cc
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …

Robustness verification for contrastive learning

Z Wang, W Liu - International Conference on Machine …, 2022 - proceedings.mlr.press
Contrastive adversarial training has successfully improved the robustness of contrastive
learning (CL). However, the robustness metric used in these methods is linked to attack …

Generalization bounds: Perspectives from information theory and PAC-Bayes

F Hellström, G Durisi, B Guedj, M Raginsky - arXiv preprint arXiv …, 2023 - arxiv.org
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …

Learning via Wasserstein-based high probability generalisation bounds

P Viallard, M Haddouche… - Advances in Neural …, 2024 - proceedings.neurips.cc
Minimising upper bounds on the population risk or the generalisation gap has been widely
used in structural risk minimisation (SRM)--this is in particular at the core of PAC-Bayesian …

Generalization bounds for adversarial contrastive learning

X Zou, W Liu - Journal of Machine Learning Research, 2023 - jmlr.org
Deep networks are well-known to be fragile to adversarial attacks, and adversarial training is
one of the most popular methods used to train a robust model. To take advantage of …

Understanding negative samples in instance discriminative self-supervised representation learning

K Nozawa, I Sato - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Instance discriminative self-supervised representation learning has attracted attention
thanks to its unsupervised nature and informative feature representation for downstream …

On the surrogate gap between contrastive and supervised losses

H Bao, Y Nagano, K Nozawa - International Conference on …, 2022 - proceedings.mlr.press
Contrastive representation learning encourages data representation to make semantically
similar pairs closer than randomly drawn negative samples, which has been successful in …

Towards generalizable graph contrastive learning: An information theory perspective

Y Yuan, B Xu, H Shen, Q Cao, K Cen, W Zheng… - Neural Networks, 2024 - Elsevier
Graph Contrastive Learning (GCL) is increasingly employed in graph
representation learning with the primary aim of learning node/graph representations from a …

Federated Learning with Nonvacuous Generalisation Bounds

P Jobic, M Haddouche, B Guedj - arXiv preprint arXiv:2310.11203, 2023 - arxiv.org
We introduce a novel strategy to train randomised predictors in federated learning, where
each node of the network aims at preserving its privacy by releasing a local predictor but …