A systematic review of graph neural network in healthcare-based applications: Recent advances, trends, and future directions

SG Paul, A Saha, MZ Hasan, SRH Noori… - IEEE …, 2024 - ieeexplore.ieee.org
Graph neural network (GNN) is a formidable deep learning framework that enables the
analysis and modeling of intricate relationships present in data structured as graphs. In …

Neural networks trained with SGD learn distributions of increasing complexity

M Refinetti, A Ingrosso, S Goldt - … Conference on Machine …, 2023 - proceedings.mlr.press
The uncanny ability of over-parameterised neural networks to generalise well has been
explained using various "simplicity biases". These theories postulate that neural networks …

Are Gaussian data all you need? The extents and limits of universality in high-dimensional generalized linear estimation

L Pesce, F Krzakala, B Loureiro… - … on Machine Learning, 2023 - proceedings.mlr.press
In this manuscript we consider the problem of generalized linear estimation on Gaussian
mixture data with labels given by a single-index model. Our first result is a sharp asymptotic …

Universality laws for gaussian mixtures in generalized linear models

Y Dandi, L Stephan, F Krzakala… - Advances in …, 2024 - proceedings.neurips.cc
A recent line of work in high-dimensional statistics working under the Gaussian mixture
hypothesis has led to a number of results in the context of empirical risk minimization …

Graph-based approximate message passing iterations

C Gerbelot, R Berthier - Information and Inference: A Journal of …, 2023 - academic.oup.com
Approximate message passing (AMP) algorithms have become an important element of high-
dimensional statistical inference, mostly due to their adaptability and concentration …

High-dimensional asymptotics of denoising autoencoders

H Cui, L Zdeborová - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We address the problem of denoising data from a Gaussian mixture using a two-layer non-
linear autoencoder with tied weights and a skip connection. We consider the high …

Fluctuations, bias, variance & ensemble of learners: Exact asymptotics for convex losses in high-dimension

B Loureiro, C Gerbelot, M Refinetti… - International …, 2022 - proceedings.mlr.press
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in
modern Machine Learning practice. Understanding the statistical fluctuations engendered …

On double-descent in uncertainty quantification in overparametrized models

L Clarté, B Loureiro, F Krzakala… - International …, 2023 - proceedings.mlr.press
Uncertainty quantification is a central challenge in reliable and trustworthy machine
learning. Naive measures such as last-layer scores are well-known to yield overconfident …

Asymptotics of feature learning in two-layer networks after one gradient-step

H Cui, L Pesce, Y Dandi, F Krzakala, YM Lu… - arXiv preprint arXiv …, 2024 - arxiv.org
In this manuscript we investigate the problem of how two-layer neural networks learn
features from data, and improve over the kernel regime, after being trained with a single …

Multinomial logistic regression: Asymptotic normality on null covariates in high-dimensions

K Tan, PC Bellec - Advances in Neural Information …, 2024 - proceedings.neurips.cc
This paper investigates the asymptotic distribution of the maximum-likelihood estimate
(MLE) in multinomial logistic models in the high-dimensional regime where dimension and …