Meta-learning with task-adaptive loss function for few-shot learning

S Baik, J Choi, H Kim, D Cho, J Min… - Proceedings of the …, 2021 - openaccess.thecvf.com
In few-shot learning scenarios, the challenge is to generalize and perform well on new
unseen examples when only very few labeled examples are available for each task. Model …

Fast finite width neural tangent kernel

R Novak, J Sohl-Dickstein… - … on Machine Learning, 2022 - proceedings.mlr.press
The Neural Tangent Kernel (NTK), defined as the outer product of the neural
network (NN) Jacobians, has emerged as a central object of study in deep learning. In the …
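The snippet above defines the NTK as the outer product of the network's parameter Jacobians, i.e. Θ(x, x′) = J(x) J(x′)ᵀ. A minimal illustrative sketch of that definition, using a hypothetical two-parameter model f(x; w) = w₁x + w₂x² (chosen only so the Jacobian is trivial to write down, not anything from the paper):

```python
import numpy as np

def jacobian(x):
    # For f(x; w) = w1*x + w2*x**2, the parameter Jacobian
    # df/dw = [x, x**2] (independent of w, since f is linear in w).
    return np.array([x, x ** 2])

def empirical_ntk(x1, x2):
    # One NTK entry: inner product of the two inputs' Jacobians,
    # i.e. a single element of J(x1) @ J(x2).T.
    return jacobian(x1) @ jacobian(x2)

xs = np.array([1.0, 2.0, 3.0])
K = np.array([[empirical_ntk(a, b) for b in xs] for a in xs])

# The resulting Gram matrix is symmetric positive semi-definite.
assert np.allclose(K, K.T)
print(K)
```

For a real network one would replace the hand-written `jacobian` with automatic differentiation; the kernel matrix construction is unchanged.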

Making look-ahead active learning strategies feasible with neural tangent kernels

MA Mohamadi, W Bae… - Advances in Neural …, 2022 - proceedings.neurips.cc
We propose a new method for approximating active learning acquisition strategies that are
based on retraining with hypothetically labeled candidate data points. Although this is …
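The look-ahead strategy described above scores each unlabeled candidate by retraining with each hypothetical label and measuring the resulting validation loss. A hedged sketch of that baseline idea (the closed-form linear model, the data, and the names `fit`, `val_loss`, and `lookahead_score` are illustrative assumptions, not the paper's NTK-based approximation):

```python
import numpy as np

def fit(X, y):
    # Closed-form least squares for a linear model y ≈ X @ w.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def val_loss(w, Xv, yv):
    # Mean squared error on a held-out validation set.
    return float(np.mean((Xv @ w - yv) ** 2))

def lookahead_score(X, y, x_cand, labels, Xv, yv):
    # Retrain once per hypothetical label for the candidate point and
    # return the average validation loss; lower is a better candidate.
    losses = []
    for yc in labels:
        Xa = np.vstack([X, x_cand])
        ya = np.append(y, yc)
        losses.append(val_loss(fit(Xa, ya), Xv, yv))
    return float(np.mean(losses))

# Toy usage: two labeled points, one candidate, two hypothetical labels.
X = np.array([[1.0], [2.0]])
y = np.array([1.1, 1.9])
Xv, yv = np.array([[3.0]]), np.array([3.0])
score = lookahead_score(X, y, np.array([[4.0]]), [3.5, 4.5], Xv, yv)
print(score)
```

The expense the abstract alludes to is visible even here: each candidate requires one retraining per hypothetical label, which is what motivates approximating the retrained model instead of refitting from scratch.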

Learning to learn and remember super long multi-domain task sequence

Z Wang, L Shen, T Duan, D Zhan… - Proceedings of the …, 2022 - openaccess.thecvf.com
Catastrophic forgetting (CF) frequently occurs when learning with non-stationary data
distribution. The CF issue remains nearly unexplored and is more challenging when meta …

Fast neural kernel embeddings for general activations

I Han, A Zandieh, J Lee, R Novak… - Advances in neural …, 2022 - proceedings.neurips.cc
The infinite-width limit has shed light on generalization and optimization aspects of deep learning
by establishing connections between neural networks and kernel methods. Despite their …

A fast, well-founded approximation to the empirical neural tangent kernel

MA Mohamadi, W Bae… - … Conference on Machine …, 2023 - proceedings.mlr.press
Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given
network's representation: they are often far less expensive to compute and applicable more …

Meta-learning without data via Wasserstein distributionally-robust model fusion

Z Wang, X Wang, L Shen, Q Suo… - Uncertainty in …, 2022 - proceedings.mlr.press
Existing meta-learning works assume that each task has available training and testing data.
However, there are many available pre-trained models without accessing their training data …

Learning to learn from APIs: black-box data-free meta-learning

Z Hu, L Shen, Z Wang, B Wu… - … on Machine Learning, 2023 - proceedings.mlr.press
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-
learning from a collection of pre-trained models without access to the training data. Existing …

Meta-learning with less forgetting on large-scale non-stationary task distributions

Z Wang, L Shen, L Fang, Q Suo, D Zhan… - … on Computer Vision, 2022 - Springer
The paradigm of machine intelligence moves from purely supervised learning to a more
practical scenario where many loosely related unlabeled data are available and labeled data …

Few-shot backdoor attacks via neural tangent kernels

J Hayase, S Oh - arXiv preprint arXiv:2210.05929, 2022 - arxiv.org
In a backdoor attack, an attacker injects corrupted examples into the training set. The goal of
the attacker is to cause the final trained model to predict the attacker's desired target label …