| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Relatively smooth convex optimization by first-order methods, and applications | H Lu, RM Freund, Y Nesterov | SIAM Journal on Optimization 28 (1), 333-354 | 378 | 2018 |
| The best of many worlds: Dual mirror descent for online allocation problems | SR Balseiro, H Lu, V Mirrokni | Operations Research 71 (1), 101-119 | 140* | 2023 |
| Depth creates no bad local minima | H Lu, K Kawaguchi | arXiv preprint arXiv:1702.08580 | 123 | 2017 |
| "Relative continuity" for non-Lipschitz nonsmooth convex optimization using stochastic (or deterministic) mirror descent | H Lu | INFORMS Journal on Optimization 1 (4), 288-303 | 78 | 2019 |
| Ordered SGD: A new stochastic optimization framework for empirical risk minimization | K Kawaguchi, H Lu | International Conference on Artificial Intelligence and Statistics, 669-679 | 67 | 2020 |
| Practical large-scale linear programming using primal-dual hybrid gradient | D Applegate, M Díaz, O Hinder, H Lu, M Lubin, B O'Donoghue, W Schudy | Advances in Neural Information Processing Systems 34, 20243-20257 | 60 | 2021 |
| Regularized online allocation problems: Fairness and beyond | S Balseiro, H Lu, V Mirrokni | International Conference on Machine Learning, 630-639 | 47 | 2021 |
| Faster first-order primal-dual methods for linear programming using restarts and sharpness | D Applegate, O Hinder, H Lu, M Lubin | Mathematical Programming 201 (1), 133-184 | 41 | 2023 |
| Accelerating gradient boosting machines | H Lu, SP Karimireddy, N Ponomareva, V Mirrokni | International Conference on Artificial Intelligence and Statistics, 516-526 | 40 | 2020 |
| Randomized gradient boosting machine | H Lu, R Mazumder | SIAM Journal on Optimization 30 (4), 2780-2808 | 37 | 2020 |
| The landscape of the proximal point method for nonconvex–nonconcave minimax optimization | B Grimmer, H Lu, P Worah, V Mirrokni | Mathematical Programming 201 (1), 373-407 | 36* | 2023 |
| Accelerating greedy coordinate descent methods | H Lu, R Freund, V Mirrokni | International Conference on Machine Learning, 3257-3266 | 36 | 2018 |
| New computational guarantees for solving convex optimization problems with first-order methods, via a function growth condition measure | RM Freund, H Lu | Mathematical Programming 170, 445-477 | 35 | 2018 |
| Generalized stochastic Frank–Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization | H Lu, RM Freund | Mathematical Programming 187 (1), 317-349 | 33 | 2021 |
| An $O(s^r)$-resolution ODE framework for discrete-time optimization algorithms and applications to the linear convergence of minimax problems | H Lu | Mathematical Programming 194, 1061-1112 | 32* | 2022 |
| Approximate leave-one-out for fast parameter tuning in high dimensions | S Wang, W Zhou, H Lu, A Maleki, V Mirrokni | International Conference on Machine Learning, 5228-5237 | 28 | 2018 |
| Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming | D Applegate, M Díaz, H Lu, M Lubin | SIAM Journal on Optimization 34 (1), 459-484 | 22 | 2024 |
| Approximate leave-one-out for high-dimensional non-differentiable learning problems | S Wang, W Zhou, A Maleki, H Lu, V Mirrokni | arXiv preprint arXiv:1810.02716 | 19 | 2018 |
| On the linear convergence of extragradient methods for nonconvex–nonconcave minimax problems | S Hajizadeh, H Lu, B Grimmer | INFORMS Journal on Optimization 6 (1), 19-31 | 9 | 2024 |
| Limiting behaviors of nonconvex–nonconcave minimax optimization via continuous-time systems | B Grimmer, H Lu, P Worah, V Mirrokni | International Conference on Algorithmic Learning Theory, 465-487 | 9 | 2022 |

An asterisk (*) marks a citation count that includes citations to merged versions of the article.