Sushrut Karmalkar
University of Wisconsin-Madison
Verified email at cs.utexas.edu - Homepage
Title
Cited by
Year
List-decodable linear regression
S Karmalkar, A Klivans, P Kothari
Advances in Neural Information Processing Systems 32, 2019
82 · 2019
Superpolynomial lower bounds for learning one-layer neural networks using gradient descent
S Goel, A Gollakota, Z Jin, S Karmalkar, A Klivans
International Conference on Machine Learning, 3587-3596, 2020
76 · 2020
Time/accuracy tradeoffs for learning a ReLU with respect to Gaussian marginals
S Goel, S Karmalkar, A Klivans
Advances in Neural Information Processing Systems 32, 2019
58 · 2019
Approximation schemes for ReLU regression
I Diakonikolas, S Goel, S Karmalkar, AR Klivans, M Soltanolkotabi
Conference on Learning Theory, 1452-1485, 2020
56 · 2020
Robustly learning any clusterable mixture of Gaussians
I Diakonikolas, SB Hopkins, D Kane, S Karmalkar
arXiv preprint arXiv:2005.06417, 2020
51 · 2020
Fairness for image generation with uncertain sensitive attributes
A Jalal, S Karmalkar, J Hoffmann, A Dimakis, E Price
International Conference on Machine Learning, 4721-4732, 2021
45 · 2021
Instance-optimal compressed sensing via posterior sampling
A Jalal, S Karmalkar, AG Dimakis, E Price
arXiv preprint arXiv:2106.11438, 2021
43 · 2021
Outlier-robust high-dimensional sparse estimation via iterative filtering
I Diakonikolas, D Kane, S Karmalkar, E Price, A Stewart
Advances in Neural Information Processing Systems 32, 2019
42 · 2019
Compressed sensing with adversarial sparse noise via L1 regression
S Karmalkar, E Price
arXiv preprint arXiv:1809.08055, 2018
36 · 2018
Outlier-robust clustering of Gaussians and other non-spherical mixtures
A Bakshi, I Diakonikolas, SB Hopkins, D Kane, S Karmalkar, PK Kothari
2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS …, 2020
33 · 2020
Robust polynomial regression up to the information theoretic limit
D Kane, S Karmalkar, E Price
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS …, 2017
18 · 2017
Robust sparse mean estimation via sum of squares
I Diakonikolas, DM Kane, S Karmalkar, A Pensia, T Pittas
Conference on Learning Theory, 4703-4763, 2022
17 · 2022
On the power of compressed sensing with generative models
A Kamath, E Price, S Karmalkar
International Conference on Machine Learning, 5101-5109, 2020
17 · 2020
Lower bounds for compressed sensing with generative models
A Kamath, S Karmalkar, E Price
arXiv preprint arXiv:1912.02938, 2019
15 · 2019
List-decodable sparse mean estimation via difference-of-pairs filtering
I Diakonikolas, D Kane, S Karmalkar, A Pensia, T Pittas
Advances in Neural Information Processing Systems 35, 13947-13960, 2022
11 · 2022
Multi-model 3D registration: Finding multiple moving objects in cluttered point clouds
D Jin, S Karmalkar, H Zhang, L Carlone
arXiv preprint arXiv:2402.10865, 2024
6 · 2024
Fourier entropy-influence conjecture for random linear threshold functions
S Chakraborty, S Karmalkar, S Kundu, SV Lokam, N Saurabh
LATIN 2018: Theoretical Informatics: 13th Latin American Symposium, Buenos …, 2018
5 · 2018
Compressed sensing with approximate priors via conditional resampling
A Jalal, S Karmalkar, A Dimakis, E Price
NeurIPS 2020 Workshop on Deep Learning and Inverse Problems, 2020
4 · 2020
Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions
I Diakonikolas, S Karmalkar, JH Park, C Tzamos
The Thirty Sixth Annual Conference on Learning Theory, 5453-5475, 2023
1 · 2023
The polynomial method is universal for distribution-free correlational SQ learning
A Gollakota, S Karmalkar, A Klivans
arXiv preprint arXiv:2010.11925, 2020
1 · 2020
Articles 1–20