Statistical indistinguishability of learning algorithms
When two different parties use the same learning rule on their own data, how can we test
whether the distributions of the two outcomes are similar? In this paper, we study the …
Replicable learning of large-margin halfspaces
We provide efficient replicable algorithms for the problem of learning large-margin
halfspaces. Our results improve upon the algorithms provided by Impagliazzo, Lei, Pitassi …
Coarse-to-fine incremental few-shot learning
Different from fine-tuning models pre-trained on a large-scale dataset of preset classes,
class-incremental learning (CIL) aims to recognize novel classes over time without forgetting …
Transfer learning beyond bounded density ratios
We study the fundamental problem of transfer learning where a learning algorithm collects
data from some source distribution $ P $ but needs to perform well with respect to a different …
Hyperbolic space with hierarchical margin boosts fine-grained learning from coarse labels
Learning fine-grained embeddings from coarse labels is a challenging task due to limited
label granularity supervision, i.e., lacking the detailed distinctions required for fine-grained …
On the Computational Landscape of Replicable Learning
We study computational aspects of algorithmic replicability, a notion of stability introduced by
Impagliazzo, Lei, Pitassi, and Sorrell [2022]. Motivated by a recent line of work that …
Perfect sampling from pairwise comparisons
D Fotakis, A Kalavasis… - Advances in Neural …, 2022 - proceedings.neurips.cc
In this work, we study how to efficiently obtain perfect samples from a discrete distribution
$\mathcal{D}$ given access only to pairwise comparisons of elements of its support …
Learning Hard-Constrained Models with One Sample
We consider the problem of estimating the parameters of a Markov Random Field with hard constraints using a single sample. As our main running examples, we use the k-SAT and the …
Efficient Subclass Segmentation in Medical Images
As research interests in medical image analysis become increasingly fine-grained, the cost
for extensive annotation also rises. One feasible way to reduce the cost is to annotate with …
Active labeling: streaming stochastic gradients
The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to iterate over input/output pairs of a training dataset …