Algorithms and hardness for learning linear thresholds from label proportions
R Saket - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
We study the learnability of linear threshold functions (LTFs) in the learning from label
proportions (LLP) framework. In this, the feature-vector classifier is learnt from bags of …
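To make the LLP setting described in these entries concrete, here is a minimal sketch (with hypothetical data and an assumed hidden LTF) of how a learner sees bags of feature vectors together with only bag-level label proportions:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])  # hidden LTF weight vector (illustrative)

# A bag is a set of feature vectors; in LLP the learner observes only
# the fraction of positive labels in each bag, not individual labels.
bags = [rng.normal(size=(5, 3)) for _ in range(4)]
proportions = [float(np.mean(X @ w > 0)) for X in bags]
print(proportions)  # one label proportion in [0, 1] per bag
```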
Learnability of linear thresholds from label proportions
R Saket - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
We study the problem of properly learning linear threshold functions (LTFs) in the learning
from label proportions (LLP) framework. In this, the learning is on a collection of bags of …
PAC learning linear thresholds from label proportions
A Brahmbhatt, R Saket… - Advances in Neural …, 2024 - proceedings.neurips.cc
Learning from label proportions (LLP) is a generalization of supervised learning in which the
training data is available as sets or bags of feature-vectors (instances) along with the …
Polynomial threshold functions, hyperplane arrangements, and random tensors
P Baldi, R Vershynin - SIAM Journal on Mathematics of Data Science, 2019 - SIAM
A simple way to generate a Boolean function is to take the sign of a real polynomial in n
variables. Such Boolean functions are called polynomial threshold functions. How many low …
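The definition above (a polynomial threshold function is the sign of a real polynomial) can be sketched directly; the degree-2 polynomial below is illustrative, not taken from the paper:

```python
import itertools

# A degree-2 real polynomial p(x1, x2, x3) with illustrative coefficients.
def p(x):
    x1, x2, x3 = x
    return x1 * x2 - x3 + 0.5

# The PTF is the sign of p, giving a Boolean function on {-1, 1}^3.
def ptf(x):
    return 1 if p(x) > 0 else -1

table = {x: ptf(x) for x in itertools.product([-1, 1], repeat=3)}
print(table)
```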
Nearly tight bounds for robust proper learning of halfspaces with a margin
I Diakonikolas, D Kane… - Advances in Neural …, 2019 - proceedings.neurips.cc
We study the problem of properly learning large margin halfspaces in the agnostic
PAC model. In more detail, we study the complexity of properly learning $d$-dimensional …
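The margin of a halfspace on a labeled sample, the quantity this entry's bounds are stated in terms of, can be illustrated as the smallest signed distance of any point to the hyperplane (the data below is hypothetical):

```python
import numpy as np

# Geometric margin of sign(<w, x>) on a labeled sample:
# min_i y_i * <w, x_i> / ||w||  (illustrative data, not from the paper).
w = np.array([3.0, 4.0])                          # ||w|| = 5
X = np.array([[1.0, 1.0], [-2.0, 0.0], [0.0, -1.5]])
y = np.array([1, -1, -1])
margin = np.min(y * (X @ w)) / np.linalg.norm(w)
print(margin)  # smallest signed distance to the hyperplane
```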
Efficient Discrepancy Testing for Learning with Distribution Shift
A fundamental notion of distance between train and test distributions from the field of domain
adaptation is discrepancy distance. While in general hard to compute, here we provide the …
Unified Binary and Multiclass Margin-Based Classification
The notion of margin loss has been central to the development and analysis of algorithms for
binary classification. To date, however, there remains no consensus as to the analogue of …
Polynomial threshold functions for decision lists
V Podolskii, NV Proskurin - arXiv preprint arXiv:2207.09371, 2022 - arxiv.org
For $S \subseteq \{0,1\}^n$, a Boolean function $f \colon S \to \{-1,1\}$ is a polynomial
threshold function (PTF) of degree $d$ and weight $W$ if there is a polynomial $p$ with …
Hardness of learning DNFs using halfspaces
The problem of learning t-term DNF formulas (for t = O(1)) has been studied extensively in
the PAC model since its introduction by Valiant (STOC 1984). A t-term DNF can be efficiently …
Degree-d Chow parameters robustly determine degree-d PTFs (and algorithmic applications)
I Diakonikolas, DM Kane - Proceedings of the 51st Annual ACM SIGACT …, 2019 - dl.acm.org
The degree-d Chow parameters of a Boolean function are its degree at most d Fourier
coefficients. It is well-known that degree-d Chow parameters uniquely characterize degree-d …
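The degree-1 case of the Chow parameters defined in this entry, the Fourier coefficients $\hat f(\emptyset) = E[f(x)]$ and $\hat f(\{i\}) = E[f(x)\,x_i]$ over the uniform cube, can be computed by direct enumeration; the majority function below is an illustrative example:

```python
import itertools

# Degree-1 Chow parameters of f over the uniform distribution on {-1,1}^n:
# the constant coefficient E[f(x)] followed by E[f(x) * x_i] for each i.
def chow1(f, n):
    cube = list(itertools.product([-1, 1], repeat=n))
    c0 = sum(f(x) for x in cube) / len(cube)
    ci = [sum(f(x) * x[i] for x in cube) / len(cube) for i in range(n)]
    return [c0] + ci

maj = lambda x: 1 if sum(x) > 0 else -1  # majority on 3 bits (example)
print(chow1(maj, 3))  # → [0.0, 0.5, 0.5, 0.5]
```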