List-decodable linear regression. S. Karmalkar, A. Klivans, P. Kothari. Advances in Neural Information Processing Systems 32, 2019. Cited by 82.
Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. S. Goel, A. Gollakota, Z. Jin, S. Karmalkar, A. Klivans. International Conference on Machine Learning, 3587-3596, 2020. Cited by 76.
Time/accuracy tradeoffs for learning a ReLU with respect to Gaussian marginals. S. Goel, S. Karmalkar, A. Klivans. Advances in Neural Information Processing Systems 32, 2019. Cited by 58.
Approximation schemes for ReLU regression. I. Diakonikolas, S. Goel, S. Karmalkar, A. R. Klivans, M. Soltanolkotabi. Conference on Learning Theory, 1452-1485, 2020. Cited by 56.
Robustly learning any clusterable mixture of Gaussians. I. Diakonikolas, S. B. Hopkins, D. Kane, S. Karmalkar. arXiv preprint arXiv:2005.06417, 2020. Cited by 51.
Fairness for image generation with uncertain sensitive attributes. A. Jalal, S. Karmalkar, J. Hoffmann, A. Dimakis, E. Price. International Conference on Machine Learning, 4721-4732, 2021. Cited by 45.
Instance-optimal compressed sensing via posterior sampling. A. Jalal, S. Karmalkar, A. G. Dimakis, E. Price. arXiv preprint arXiv:2106.11438, 2021. Cited by 43.
Outlier-robust high-dimensional sparse estimation via iterative filtering. I. Diakonikolas, D. Kane, S. Karmalkar, E. Price, A. Stewart. Advances in Neural Information Processing Systems 32, 2019. Cited by 42.
Compressed sensing with adversarial sparse noise via l1 regression. S. Karmalkar, E. Price. arXiv preprint arXiv:1809.08055, 2018. Cited by 36.
Outlier-robust clustering of Gaussians and other non-spherical mixtures. A. Bakshi, I. Diakonikolas, S. B. Hopkins, D. Kane, S. Karmalkar, P. K. Kothari. 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS …, 2020. Cited by 33.
Robust polynomial regression up to the information theoretic limit. D. Kane, S. Karmalkar, E. Price. 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS …, 2017. Cited by 18.
Robust sparse mean estimation via sum of squares. I. Diakonikolas, D. M. Kane, S. Karmalkar, A. Pensia, T. Pittas. Conference on Learning Theory, 4703-4763, 2022. Cited by 17.
On the power of compressed sensing with generative models. A. Kamath, E. Price, S. Karmalkar. International Conference on Machine Learning, 5101-5109, 2020. Cited by 17.
Lower bounds for compressed sensing with generative models. A. Kamath, S. Karmalkar, E. Price. arXiv preprint arXiv:1912.02938, 2019. Cited by 15.
List-decodable sparse mean estimation via difference-of-pairs filtering. I. Diakonikolas, D. Kane, S. Karmalkar, A. Pensia, T. Pittas. Advances in Neural Information Processing Systems 35, 13947-13960, 2022. Cited by 11.
Multi-model 3D registration: Finding multiple moving objects in cluttered point clouds. D. Jin, S. Karmalkar, H. Zhang, L. Carlone. arXiv preprint arXiv:2402.10865, 2024. Cited by 6.
Fourier entropy-influence conjecture for random linear threshold functions. S. Chakraborty, S. Karmalkar, S. Kundu, S. V. Lokam, N. Saurabh. LATIN 2018: Theoretical Informatics: 13th Latin American Symposium, Buenos …, 2018. Cited by 5.
Compressed sensing with approximate priors via conditional resampling. A. Jalal, S. Karmalkar, A. Dimakis, E. Price. NeurIPS 2020 Workshop on Deep Learning and Inverse Problems, 2020. Cited by 4.
Distribution-independent regression for generalized linear models with oblivious corruptions. I. Diakonikolas, S. Karmalkar, J. H. Park, C. Tzamos. The Thirty-Sixth Annual Conference on Learning Theory, 5453-5475, 2023. Cited by 1.
The polynomial method is universal for distribution-free correlational SQ learning. A. Gollakota, S. Karmalkar, A. Klivans. arXiv preprint arXiv:2010.11925, 2020. Cited by 1.