If influence functions are the answer, then what is the question?

J Bae, N Ng, A Lo, M Ghassemi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Influence functions efficiently estimate the effect of removing a single training data point on a
model's learned parameters. While influence estimates align well with leave-one-out …
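
For context, the standard first-order influence estimate this literature builds on (Koh & Liang, 2017; the notation here is mine, not necessarily this paper's) approximates the effect of removing a training point z on the empirical risk minimizer:

```latex
\hat\theta_{-z} - \hat\theta \;\approx\; \frac{1}{n}\, H_{\hat\theta}^{-1}\, \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} \;=\; \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta),
```

where \(\hat\theta\) minimizes the average training loss. The paper's question is when this estimate actually tracks leave-one-out retraining.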

Towards last-layer retraining for group robustness with fewer annotations

T LaBonte, V Muthukumar… - Advances in Neural …, 2024 - proceedings.neurips.cc
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious
correlations and poor generalization on minority groups. The recent deep feature …
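
A minimal sketch of the last-layer-retraining idea referenced here, in the spirit of deep feature reweighting (DFR): freeze the ERM-trained backbone and refit only the final linear layer on a small, group-balanced held-out set. The names (`embed_fn`, `X_balanced`, ...) are illustrative, not this paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def last_layer_retrain(embed_fn, X_balanced, y_balanced, C=1.0):
    """Refit only the classifier head on a group-balanced held-out set.

    embed_fn: frozen feature extractor mapping one input to a 1-D embedding.
    """
    feats = np.stack([embed_fn(x) for x in X_balanced])   # backbone stays frozen
    head = LogisticRegression(C=C, max_iter=1000)         # the new "last layer"
    head.fit(feats, y_balanced)
    return head
```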

Robust learning with progressive data expansion against spurious correlation

Y Deng, Y Yang, B Mirzasoleiman… - Advances in Neural …, 2024 - proceedings.neurips.cc
While deep learning models have shown remarkable performance in various tasks, they are
susceptible to learning non-generalizable _spurious features_ rather than the core features …
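
One schematic reading of "progressive data expansion" (a sketch under my own assumptions, not the paper's algorithm): warm up on a small subset, then grow the active training pool over epochs. The sizes and growth rate below are placeholders.

```python
def expansion_schedule(dataset, warmup_size, grow_per_epoch, epochs):
    """Yield (epoch, active_subset) pairs with a progressively growing pool."""
    order = list(range(len(dataset)))   # fixed order; a real method may reorder
    pool = min(warmup_size, len(dataset))
    for epoch in range(epochs):
        yield epoch, [dataset[i] for i in order[:pool]]
        pool = min(len(dataset), pool + grow_per_epoch)
```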

Calibrating multi-modal representations: A pursuit of group robustness without annotations

C You, Y Min, W Dai, JS Sekhon… - 2024 IEEE/CVF …, 2024 - ieeexplore.ieee.org
Fine-tuning pre-trained vision-language models, like CLIP, has yielded success on diverse
downstream tasks. However, several pain points persist for this paradigm: (i) directly tuning …
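
For reference, the fine-tuning paradigm being critiqued starts from CLIP's zero-shot setup; below is the standard usage of the public openai/CLIP package (image path and class prompts are placeholders), not the paper's calibration method.

```python
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
image = preprocess(Image.open("example.jpg")).unsqueeze(0)   # hypothetical image path
text = clip.tokenize(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(text)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)         # cosine-normalize
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1)        # 100.0 ~ CLIP's logit scale
```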

Survey: Leakage and privacy at inference time

M Jegorova, C Kaul, C Mayor, AQ O'Neil… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
Leakage of data from publicly available Machine Learning (ML) models is an area of
growing significance since commercial and government applications of ML can draw on …
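
A concrete instance of the inference-time leakage such surveys cover is the loss-threshold membership-inference attack (Yeom et al., 2018); a minimal sketch, with `loss_fn` and `threshold` as assumptions:

```python
import numpy as np

def membership_guess(loss_fn, samples, threshold):
    """Predict 'training member' when a sample's loss is suspiciously low."""
    losses = np.array([loss_fn(x, y) for (x, y) in samples])
    return losses < threshold   # boolean mask: True = suspected member
```

The threshold is typically calibrated on samples known not to be in the training set.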

On strengthening and defending graph reconstruction attack with markov chain approximation

Z Zhou, C Zhou, X Li, J Yao, Q Yao, B Han - arXiv preprint arXiv …, 2023 - arxiv.org
Although powerful graph neural networks (GNNs) have boosted numerous real-world
applications, the potential privacy risk is still underexplored. To close this gap, we perform …

Neural networks memorise personal information from one sample

J Hartley, PP Sanchez, F Haider, SA Tsaftaris - Scientific Reports, 2023 - nature.com
Deep neural networks (DNNs) have achieved high accuracy in diagnosing multiple
diseases/conditions at a large scale. However, a number of concerns have been raised …

Stability, generalization and privacy: Precise analysis for random and NTK features

S Bombari, M Mondelli - arXiv preprint arXiv:2305.12100, 2023 - arxiv.org
Deep learning models can be vulnerable to recovery attacks, raising privacy concerns for
users, and widespread algorithms such as empirical risk minimization (ERM) often do not …
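
For orientation, the random-features model such analyses study (a standard setup; the notation is mine, not necessarily the paper's) trains only the second layer over fixed random first-layer weights:

```latex
f_{\mathrm{RF}}(x) \;=\; \frac{1}{\sqrt{k}} \sum_{i=1}^{k} a_i\, \sigma\!\left(w_i^{\top} x\right),
\qquad w_i \sim \mathcal{N}(0, I_d),
\qquad
\hat{a} \;=\; \arg\min_{a}\; \frac{1}{n} \sum_{j=1}^{n} \left( f_{\mathrm{RF}}(x_j) - y_j \right)^{2},
```

with the NTK-features variant replacing \(\sigma(w_i^\top x)\) by the network's parameter gradients at initialization.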

The Pitfalls of Memorization: When Memorization Hurts Generalization

R Bayat, M Pezeshki, E Dohmatob… - arXiv preprint arXiv …, 2024 - arxiv.org
Neural networks often learn simple explanations that fit the majority of the data while
memorizing exceptions that deviate from these explanations. This behavior leads to poor …
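
A standard way to quantify the memorization this abstract refers to is Feldman's example-level score (Feldman & Zhang, 2020; given here for context, not as this paper's definition):

```latex
\mathrm{mem}(\mathcal{A}, S, i) \;=\;
\Pr_{h \sim \mathcal{A}(S)}\!\left[ h(x_i) = y_i \right]
\;-\;
\Pr_{h \sim \mathcal{A}(S^{\setminus i})}\!\left[ h(x_i) = y_i \right],
```

i.e., the gap in accuracy on example \(i\) between models trained with and without it.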

A machine learning approach to identifying suicide risk among text-based crisis counseling encounters

M Broadbent, M Medina Grespan, K Axford… - Frontiers in …, 2023 - frontiersin.org
Introduction: With the increasing utilization of text-based suicide crisis counseling, new
means of identifying at-risk clients must be explored. Natural language processing (NLP) …
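
For context, a common NLP baseline in text-classification studies of this kind is TF-IDF features plus logistic regression; a minimal sketch with placeholder data, not this paper's pipeline (and a real crisis-counseling system would require far more rigorous validation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["example transcript one", "example transcript two"]  # placeholder data
labels = [0, 1]                                               # 1 = elevated-risk label

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict_proba(["another transcript"]))
```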