| Publication | Cited by | Year |
|---|---|---|
| Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence. M Pilanci, MJ Wainwright. SIAM Journal on Optimization 27 (1), 205-245, 2017 | 320 | 2017 |
| Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares. M Pilanci, MJ Wainwright. Journal of Machine Learning Research 17 (53), 1-38, 2016 | 239 | 2016 |
| Randomized sketches of convex programs with sharp guarantees. M Pilanci, MJ Wainwright. IEEE Transactions on Information Theory 61 (9), 5096-5115, 2015 | 200 | 2015 |
| Randomized sketches for kernels: Fast and optimal nonparametric regression. Y Yang, M Pilanci, MJ Wainwright | 183 | 2017 |
| Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks. M Pilanci, T Ergen. International Conference on Machine Learning, 7695-7705, 2020 | 101 | 2020 |
| Sparse learning via Boolean relaxations. M Pilanci, MJ Wainwright, L El Ghaoui. Mathematical Programming 151 (1), 63-87, 2015 | 85 | 2015 |
| Revealing the structure of deep neural networks via convex duality. T Ergen, M Pilanci. International Conference on Machine Learning, 3004-3014, 2021 | 71 | 2021 |
| Recovery of sparse probability measures via convex programming. M Pilanci, L El Ghaoui, V Chandrasekaran. Advances in Neural Information Processing Systems 25, 2012 | 65 | 2012 |
| Convex geometry and duality of over-parameterized neural networks. T Ergen, M Pilanci. Journal of Machine Learning Research 22 (212), 1-63, 2021 | 55 | 2021 |
| Implicit convex regularizers of CNN architectures: Convex optimization of two- and three-layer networks in polynomial time. T Ergen, M Pilanci. arXiv preprint arXiv:2006.14798, 2020 | 44 | 2020 |
| Vector-output ReLU neural network problems are copositive programs: Convex analysis of two-layer networks and polynomial-time algorithms. A Sahiner, T Ergen, J Pauly, M Pilanci. arXiv preprint arXiv:2012.13329, 2020 | 40 | 2020 |
| Global optimality beyond two layers: Training deep ReLU networks via convex programs. T Ergen, M Pilanci. International Conference on Machine Learning, 2993-3003, 2021 | 35 | 2021 |
| Randomized sketches for kernels: Fast and optimal non-parametric regression. Y Yang, M Pilanci, MJ Wainwright. arXiv preprint arXiv:1501.06195, 2015 | 35 | 2015 |
| Optimal randomized first-order methods for least-squares problems. J Lacotte, M Pilanci. International Conference on Machine Learning, 5587-5597, 2020 | 32 | 2020 |
| Convex geometry of two-layer ReLU networks: Implicit autoencoding and interpretable models. T Ergen, M Pilanci. International Conference on Artificial Intelligence and Statistics, 4024-4033, 2020 | 31 | 2020 |
| Demystifying batch normalization in ReLU networks: Equivalent convex optimization models and implicit regularization. T Ergen, A Sahiner, B Ozturkler, J Pauly, M Mardani, M Pilanci. arXiv preprint arXiv:2103.01499, 2021 | 30 | 2021 |
| Debiasing distributed second order optimization with surrogate sketching and scaled regularization. M Derezinski, B Bartan, M Pilanci, MW Mahoney. Advances in Neural Information Processing Systems 33, 6684-6695, 2020 | 29 | 2020 |
| Structured least squares problems and robust estimators. M Pilanci, O Arikan, MC Pinar. IEEE Transactions on Signal Processing 58 (5), 2453-2465, 2010 | 29 | 2010 |
| Fast convex optimization for two-layer ReLU networks: Equivalent model classes and cone decompositions. A Mishkin, A Sahiner, M Pilanci. International Conference on Machine Learning, 15770-15816, 2022 | 27 | 2022 |
| Newton-LESS: Sparsification without trade-offs for the sketched Newton update. M Derezinski, J Lacotte, M Pilanci, MW Mahoney. Advances in Neural Information Processing Systems 34, 2835-2847, 2021 | 27 | 2021 |