Task-specific skill localization in fine-tuned language models
Pre-trained language models can be fine-tuned to solve diverse NLP tasks, including in few-
shot settings. Thus fine-tuning allows the model to quickly pick up task-specific "skills," but …
Rankfeat: Rank-1 feature removal for out-of-distribution detection
The task of out-of-distribution (OOD) detection is crucial for deploying machine learning
models in real-world settings. In this paper, we observe that the singular value distributions …
Advancing model pruning via bi-level optimization
The deployment constraints in practical applications necessitate the pruning of large-scale
deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket …
Model sparsity can simplify machine unlearning
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …
Compute-efficient deep learning: Algorithmic trends and opportunities
BR Bartoldson, B Kailkhura, D Blalock - Journal of Machine Learning …, 2023 - jmlr.org
Although deep learning has made great progress in recent years, the exploding economic
and environmental costs of training neural networks are becoming unsustainable. To …
Improving robustness of vision transformers by reducing sensitivity to patch corruptions
Despite their success, vision transformers remain vulnerable to image corruptions, such
as noise or blur. Indeed, we find that the vulnerability mainly stems from the unstable self …
Prime: A few primitives can boost robustness to common corruptions
Despite their impressive performance on image classification tasks, deep networks have a
hard time generalizing to unforeseen corruptions of their data. To fix this vulnerability, prior …
Defending against image corruptions through adversarial augmentations
Modern neural networks excel at image classification, yet they remain vulnerable to common
image corruptions such as blur, speckle noise or fog. Recent methods that focus on this …
Benchmark generation framework with customizable distortions for image classifier robustness
We present a novel framework for generating adversarial benchmarks to evaluate the
robustness of image classification models. The RLAB framework allows users to customize …
Dimensionality reduced training by pruning and freezing parts of a deep neural network: a survey
State-of-the-art deep learning models have a parameter count that reaches into the billions.
Training, storing and transferring such models is energy and time consuming, thus costly. A …