J Ma, S Fattahi. "Global convergence of sub-gradient method for robust matrix recovery: Small initialization, noisy measurements, and over-parameterization." Journal of Machine Learning Research, 2022. Cited by 31.
J Ma, S Fattahi. "Sign-RIP: A robust restricted isometry property for low-rank matrix recovery." NeurIPS Workshop on Optimization for Machine Learning, 2021. Cited by 18*.
J Ma, S Fattahi. "Blessing of Depth in Linear Regression: Deeper Models Have Flatter Landscape Around the True Solution." Advances in Neural Information Processing Systems (NeurIPS), 2022. Cited by 12*.
J Ma, L Guo, S Fattahi. "Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition." International Conference on Learning Representations (ICLR), 2023. Cited by 8.
J Ma, S Fattahi. "Can Learning Be Explained by Local Optimality in Low-rank Matrix Recovery?" arXiv preprint arXiv:2302.10963, 2023. Cited by 4*.
J Teng, J Ma, Y Yuan. "Towards Understanding Generalization via Decomposing Excess Risk Dynamics." International Conference on Learning Representations (ICLR), 2022. Cited by 4.
J Ma, S Fattahi. "Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion." arXiv preprint arXiv:2402.06756, 2024. Cited by 1.
J Ma, RR Chen, Y He, S Fattahi, W Hu. "Robust Sparse Mean Estimation via Incremental Learning." arXiv preprint arXiv:2305.15276, 2023.