Bilevel optimization: Convergence analysis and enhanced design. K Ji, J Yang, Y Liang. International Conference on Machine Learning, 4882-4892, 2021. Cited by 223*.
SpiderBoost and momentum: Faster variance reduction algorithms. Z Wang, K Ji, Y Zhou, Y Liang, V Tarokh. Advances in Neural Information Processing Systems 32, 2019. Cited by 178.
Provably faster algorithms for bilevel optimization. J Yang, K Ji, Y Liang. Advances in Neural Information Processing Systems 34, 13670-13682, 2021. Cited by 115.
SpiderBoost: A class of faster variance-reduced algorithms for nonconvex optimization. Z Wang, K Ji, Y Zhou, Y Liang, V Tarokh. arXiv preprint, 2018. Cited by 81.
Theoretical convergence of multi-step model-agnostic meta-learning. K Ji, J Yang, Y Liang. Journal of Machine Learning Research 23, 29:1-29:41, 2022. Cited by 79*.
Convergence of meta-learning with task-specific adaptation over partial parameters. K Ji, JD Lee, Y Liang, HV Poor. Advances in Neural Information Processing Systems 33, 11490-11500, 2020. Cited by 69.
Improved zeroth-order variance reduced algorithms and analysis for nonconvex optimization. K Ji, Z Wang, Y Zhou, Y Liang. International Conference on Machine Learning, 3100-3109, 2019. Cited by 67.
Lower bounds and accelerated algorithms for bilevel optimization. K Ji, Y Liang. Journal of Machine Learning Research 24, 22:1-22:56, 2023. Cited by 53*.
A new one-point residual-feedback oracle for black-box learning and control. Y Zhang, Y Zhou, K Ji, MM Zavlanos. Automatica 136, 110006, 2022. Cited by 46*.
A primal-dual approach to bilevel optimization with multiple inner minima. D Sow, K Ji, Z Guan, Y Liang. arXiv preprint arXiv:2203.01123, 2022. Cited by 45.
Will bilevel optimizers benefit from loops. K Ji, M Liu, Y Liang, L Ying. Advances in Neural Information Processing Systems 35, 3011-3023, 2022. Cited by 29.
Robust stochastic bandit algorithms under probabilistic unbounded adversarial attack. Z Guan, K Ji, DJ Bucci Jr, TY Hu, J Palombo, M Liston, Y Liang. Proceedings of the AAAI Conference on Artificial Intelligence 34 (4), 4036-4043, 2020. Cited by 29.
When will gradient methods converge to max-margin classifier under ReLU models? T Xu, Y Zhou, K Ji, Y Liang. Stat 10 (1), e354, 2021. Cited by 23*.
On resource pooling and separation for LRU caching. J Tan, G Quan, K Ji, N Shroff. SIGMETRICS 2018, 2 (1), 5, 2018. Cited by 23.
History-gradient aided batch size adaptation for variance reduced algorithms. K Ji, Z Wang, B Weng, Y Zhou, W Zhang, Y Liang. International Conference on Machine Learning, 4762-4772, 2020. Cited by 20*.
On the convergence theory for Hessian-free bilevel algorithms. D Sow, K Ji, Y Liang. Advances in Neural Information Processing Systems 35, 4136-4149, 2022. Cited by 19*.
Efficiently escaping saddle points in bilevel optimization. M Huang, X Chen, K Ji, S Ma, L Lai. arXiv preprint arXiv:2202.03684, 2022. Cited by 19.
Understanding estimation and generalization error of generative adversarial networks. K Ji, Y Zhou, Y Liang. IEEE Transactions on Information Theory 67 (5), 3114-3129, 2021. Cited by 16.
Asymptotic miss ratio of LRU caching with consistent hashing. K Ji, G Quan, J Tan. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, 450-458, 2018. Cited by 14.
Momentum schemes with stochastic variance reduction for nonconvex composite optimization. Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh. arXiv preprint arXiv:1902.02715, 2019. Cited by 13.