Title | Authors | Venue | Cited by | Year
Low-rank matrix completion using alternating minimization | P Jain, P Netrapalli, S Sanghavi | Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing … | 1204 | 2013
How to escape saddle points efficiently | C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan | International Conference on Machine Learning, 1724-1732 | 948 | 2017
Phase retrieval using alternating minimization | P Netrapalli, P Jain, S Sanghavi | Advances in Neural Information Processing Systems 26 | 707 | 2013
MOReL: Model-based offline reinforcement learning | R Kidambi, A Rajeswaran, P Netrapalli, T Joachims | Advances in Neural Information Processing Systems 33, 21810-21823 | 652 | 2020
What is local optimality in nonconvex-nonconcave minimax optimization? | C Jin, P Netrapalli, MI Jordan | International Conference on Machine Learning, 4880-4889 | 423* | 2020
Non-convex robust PCA | P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain | Advances in Neural Information Processing Systems 27 | 353 | 2014
The pitfalls of simplicity bias in neural networks | H Shah, K Tamuly, A Raghunathan, P Jain, P Netrapalli | Advances in Neural Information Processing Systems 33, 9573-9585 | 332 | 2020
Accelerated gradient descent escapes saddle points faster than gradient descent | C Jin, P Netrapalli, MI Jordan | Conference on Learning Theory, 1042-1085 | 281 | 2018
On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points | C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan | Journal of the ACM (JACM) 68 (2), 1-29 | 228* | 2021
Learning the graph of epidemic cascades | P Netrapalli, S Sanghavi | ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222 | 228 | 2012
Parallelizing stochastic gradient descent for least squares regression: Mini-batching, averaging, and model misspecification | P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford | Journal of Machine Learning Research 18 (223), 1-42 | 207* | 2018
Efficient algorithms for smooth minimax optimization | KK Thekumparampil, P Jain, P Netrapalli, S Oh | Advances in Neural Information Processing Systems 32 | 197 | 2019
Efficient domain generalization via common-specific low-rank decomposition | V Piratla, P Netrapalli, S Sarawagi | International Conference on Machine Learning, 7728-7738 | 194 | 2020
Learning sparsely used overcomplete dictionaries via alternating minimization | A Agarwal, A Anandkumar, P Jain, P Netrapalli | SIAM Journal on Optimization 26 (4), 2775-2799 | 194 | 2016
The step decay schedule: A near optimal, geometrically decaying learning rate procedure for least squares | R Ge, SM Kakade, R Kidambi, P Netrapalli | Advances in Neural Information Processing Systems 32 | 171 | 2019
Information-theoretic thresholds for community detection in sparse networks | J Banks, C Moore, J Neeman, P Netrapalli | Conference on Learning Theory, 383-416 | 156* | 2016
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm | P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford | Conference on Learning Theory, 1147-1164 | 150 | 2016
A short note on concentration inequalities for random vectors with subgaussian norm | C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan | arXiv preprint arXiv:1902.03736 | 145 | 2019
On the insufficiency of existing momentum schemes for stochastic optimization | R Kidambi, P Netrapalli, P Jain, SM Kakade | 2018 Information Theory and Applications Workshop (ITA), 1-9 | 127 | 2018
Learning sparsely used overcomplete dictionaries | A Agarwal, A Anandkumar, P Jain, P Netrapalli, R Tandon | Conference on Learning Theory, 123-137 | 124 | 2014