If influence functions are the answer, then what is the question?
Influence functions efficiently estimate the effect of removing a single training data point on a
model's learned parameters. While influence estimates align well with leave-one-out …
Towards last-layer retraining for group robustness with fewer annotations
T LaBonte, V Muthukumar… - Advances in Neural …, 2024 - proceedings.neurips.cc
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious
correlations and poor generalization on minority groups. The recent deep feature …
Robust learning with progressive data expansion against spurious correlation
While deep learning models have shown remarkable performance in various tasks, they are
susceptible to learning non-generalizable spurious features rather than the core features …
Calibrating multi-modal representations: A pursuit of group robustness without annotations
Fine-tuning pre-trained vision-language models, like CLIP, has yielded success on diverse
downstream tasks. However, several pain points persist for this paradigm: (i) directly tuning …
Survey: Leakage and privacy at inference time
Leakage of data from publicly available Machine Learning (ML) models is an area of
growing significance since commercial and government applications of ML can draw on …
On strengthening and defending graph reconstruction attack with Markov chain approximation
Although powerful graph neural networks (GNNs) have boosted numerous real-world
applications, the potential privacy risk is still underexplored. To close this gap, we perform …
Neural networks memorise personal information from one sample
Deep neural networks (DNNs) have achieved high accuracy in diagnosing multiple
diseases/conditions at a large scale. However, a number of concerns have been raised …
Stability, generalization and privacy: Precise analysis for random and NTK features
S Bombari, M Mondelli - arXiv preprint arXiv:2305.12100, 2023 - arxiv.org
Deep learning models can be vulnerable to recovery attacks, raising privacy concerns to
users, and widespread algorithms such as empirical risk minimization (ERM) often do not …
The Pitfalls of Memorization: When Memorization Hurts Generalization
R Bayat, M Pezeshki, E Dohmatob… - arXiv preprint arXiv …, 2024 - arxiv.org
Neural networks often learn simple explanations that fit the majority of the data while
memorizing exceptions that deviate from these explanations. This behavior leads to poor …
A machine learning approach to identifying suicide risk among text-based crisis counseling encounters
M Broadbent, M Medina Grespan, K Axford… - Frontiers in …, 2023 - frontiersin.org
Introduction: With the increasing utilization of text-based suicide crisis counseling, new
means of identifying at-risk clients must be explored. Natural language processing (NLP) …