PAC-Bayes compression bounds so tight that they can explain generalization

S Lotfi, M Finzi, S Kapoor… - Advances in …, 2022 - proceedings.neurips.cc
While there has been progress in developing non-vacuous generalization bounds for deep
neural networks, these bounds tend to be uninformative about why deep learning works. In …
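
For context, a standard McAllester-style PAC-Bayes bound of the family such compression results tighten (notation assumed here, not drawn from the paper): for any prior $P$ fixed before seeing the $n$ training examples and any posterior $Q$, with probability at least $1-\delta$,

```latex
\mathbb{E}_{h \sim Q}\left[L(h)\right]
  \;\le\;
  \mathbb{E}_{h \sim Q}\left[\hat{L}(h)\right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

Compression-based approaches keep the $\mathrm{KL}(Q \,\|\, P)$ term small by encoding the hypothesis in few bits, which is what can make bounds of this form non-vacuous for deep networks.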

Reconstruction for powerful graph representations

L Cotta, C Morris, B Ribeiro - Advances in Neural …, 2021 - proceedings.neurips.cc
Graph neural networks (GNNs) have limited expressive power, failing to represent many
graph classes correctly. While more expressive graph representation learning (GRL) …

Improving self-supervised learning by characterizing idealized representations

Y Dubois, S Ermon, TB Hashimoto… - Advances in Neural …, 2022 - proceedings.neurips.cc
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …

Probabilistic symmetries and invariant neural networks

B Bloem-Reddy, YW Teh - Journal of Machine Learning Research, 2020 - jmlr.org
Treating neural network inputs and outputs as random variables, we characterize the
structure of neural networks that can be used to model data that are invariant or equivariant …
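
To make the invariance property concrete, here is a minimal sketch (not the paper's construction; all weights and names below are hypothetical): a sum-pooled, DeepSets-style function $f(X) = \rho(\sum_i \phi(x_i))$ is invariant to reordering its input elements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for f(X) = rho(sum_i phi(x_i)).
W_phi = rng.normal(size=(3, 8))  # per-element feature map phi
W_rho = rng.normal(size=(8, 1))  # readout rho on the pooled summary

def f(X):
    h = np.tanh(X @ W_phi)          # phi applied to each element independently
    pooled = h.sum(axis=0)          # order-independent (sum) pooling
    return np.tanh(pooled @ W_rho)  # rho on the permutation-invariant summary

X = rng.normal(size=(5, 3))         # a "set" of 5 elements, 3 features each
perm = rng.permutation(5)
assert np.allclose(f(X), f(X[perm]))  # invariant under reordering the elements
```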

Lossy compression for lossless prediction

Y Dubois, B Bloem-Reddy, K Ullrich… - Advances in Neural …, 2021 - proceedings.neurips.cc
Most data is automatically collected and only ever "seen" by algorithms. Yet, data
compressors preserve perceptual fidelity rather than just the information needed by …

Out-of-domain robustness via targeted augmentations

I Gao, S Sagawa, PW Koh… - International …, 2023 - proceedings.mlr.press
Models trained on one set of domains often suffer performance drops on unseen
domains, e.g., when wildlife monitoring models are deployed in new camera locations. In this …

Approximately equivariant graph networks

N Huang, R Levie, S Villar - Advances in Neural …, 2024 - proceedings.neurips.cc
Graph neural networks (GNNs) are commonly described as being permutation equivariant
with respect to node relabeling in the graph. This symmetry of GNNs is often compared to …
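
The symmetry in question can be checked directly (a minimal sketch under assumed notation, not the paper's method): a linear message-passing layer $H \mapsto AHW$ commutes with node relabeling, since $(PAP^{\top})(PH)W = P(AHW)$ for any permutation matrix $P$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
A = (rng.random((n, n)) < 0.5).astype(float)  # hypothetical adjacency matrix
H = rng.normal(size=(n, d))                   # node features
W = rng.normal(size=(d, d))                   # weights shared across nodes

P = np.eye(n)[rng.permutation(n)]             # node relabeling as a permutation matrix

out = A @ H @ W                               # one linear message-passing layer
out_relabeled = (P @ A @ P.T) @ (P @ H) @ W   # same layer on the relabeled graph
assert np.allclose(P @ out, out_relabeled)    # outputs permute along with the nodes
```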

Causally motivated shortcut removal using auxiliary labels

M Makar, B Packer, D Moldovan… - International …, 2022 - proceedings.mlr.press
Shortcut learning, in which models make use of easy-to-represent but unstable associations,
is a major failure mode for robust machine learning. We study a flexible, causally-motivated …

Approximation-generalization trade-offs under (approximate) group equivariance

M Petrache, S Trivedi - Advances in Neural Information …, 2023 - proceedings.neurips.cc
The explicit incorporation of task-specific inductive biases through symmetry has emerged
as a general design precept in the development of high-performance machine learning …

HYTREL: Hypergraph-enhanced tabular data representation learning

P Chen, S Sarkar, L Lausen… - Advances in …, 2024 - proceedings.neurips.cc
Language models pretrained on large collections of tabular data have
demonstrated their effectiveness in several downstream tasks. However, many of these …