Stochastic distributed learning with gradient quantization and double-variance reduction. S Horváth, D Kovalev, K Mishchenko, P Richtárik, S Stich. Optimization Methods and Software 38(1), 91-106, 2023. Cited by 179.
Don't jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop. D Kovalev, S Horváth, P Richtárik. Algorithmic Learning Theory, 451-467, 2020. Cited by 170.
Acceleration for compressed gradient descent in distributed and federated optimization. Z Li, D Kovalev, X Qian, P Richtárik. arXiv preprint arXiv:2002.11364, 2020. Cited by 155.
From local SGD to local fixed-point methods for federated learning. G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtárik. International Conference on Machine Learning, 6692-6701, 2020. Cited by 118.
RSN: randomized subspace Newton. R Gower, D Kovalev, F Lieder, P Richtárik. Advances in Neural Information Processing Systems 32, 2019. Cited by 91.
Linearly converging error compensated SGD. E Gorbunov, D Kovalev, D Makarenko, P Richtárik. Advances in Neural Information Processing Systems 33, 20889-20900, 2020. Cited by 82.
Revisiting stochastic extragradient. K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky. International Conference on Artificial Intelligence and Statistics, 4573-4582, 2020. Cited by 80.
Optimal and practical algorithms for smooth and strongly convex decentralized optimization. D Kovalev, A Salim, P Richtárik. Advances in Neural Information Processing Systems 33, 18342-18352, 2020. Cited by 79.
A linearly convergent algorithm for decentralized optimization: Sending less bits for free! D Kovalev, A Koloskova, M Jaggi, P Richtárik, S Stich. International Conference on Artificial Intelligence and Statistics, 4087-4095, 2021. Cited by 74.
Stochastic Newton and cubic Newton methods with simple local linear-quadratic rates. D Kovalev, K Mishchenko, P Richtárik. arXiv preprint arXiv:1912.01597, 2019. Cited by 48.
Decentralized distributed optimization for saddle point problems. A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, et al. arXiv preprint arXiv:2102.07758, 2021. Cited by 42.
Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks. D Kovalev, E Gasanov, A Gasnikov, P Richtárik. Advances in Neural Information Processing Systems 34, 22325-22335, 2021. Cited by 40.
Accelerated methods for saddle-point problem. MS Alkousa, AV Gasnikov, DM Dvinskikh, DA Kovalev, FS Stonyakin. Computational Mathematics and Mathematical Physics 60, 1787-1809, 2020. Cited by 38.
Accelerated primal-dual gradient method for smooth and convex-concave saddle-point problems with bilinear coupling. D Kovalev, A Gasnikov, P Richtárik. Advances in Neural Information Processing Systems 35, 21725-21737, 2022. Cited by 37.
ADOM: accelerated decentralized optimization method for time-varying networks. D Kovalev, E Shulgin, P Richtárik, AV Rogozin, A Gasnikov. International Conference on Machine Learning, 5784-5793, 2021. Cited by 33.
Optimal algorithms for decentralized stochastic variational inequalities. D Kovalev, A Beznosikov, A Sadiev, M Persiianov, P Richtárik, et al. Advances in Neural Information Processing Systems 35, 31073-31088, 2022. Cited by 31.
The first optimal acceleration of high-order methods in smooth convex optimization. D Kovalev, A Gasnikov. Advances in Neural Information Processing Systems 35, 35339-35351, 2022. Cited by 30.
On accelerated methods for saddle-point problems with composite structure. V Tominin, Y Tominin, E Borodich, D Kovalev, A Gasnikov, et al. arXiv preprint arXiv:2103.09344, 2021. Cited by 29.
Accelerated methods for composite non-bilinear saddle point problem. M Alkousa, D Dvinskikh, F Stonyakin, A Gasnikov, D Kovalev. arXiv preprint arXiv:1906.03620, 2019. Cited by 29.
Towards accelerated rates for distributed optimization over time-varying networks. A Rogozin, V Lukoshkin, A Gasnikov, D Kovalev, E Shulgin. Optimization and Applications: 12th International Conference, OPTIMA 2021 …, 2021. Cited by 28.