Dmitry Kovalev
Yandex
Verified email at kaust.edu.sa - Homepage
Title
Cited by
Year
Stochastic distributed learning with gradient quantization and double-variance reduction
S Horváth, D Kovalev, K Mishchenko, P Richtárik, S Stich
Optimization Methods and Software 38 (1), 91-106, 2023
179 · 2023
Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop
D Kovalev, S Horváth, P Richtárik
Algorithmic Learning Theory, 451-467, 2020
170 · 2020
Acceleration for compressed gradient descent in distributed and federated optimization
Z Li, D Kovalev, X Qian, P Richtárik
arXiv preprint arXiv:2002.11364, 2020
155 · 2020
From local SGD to local fixed-point methods for federated learning
G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtarik
International Conference on Machine Learning, 6692-6701, 2020
118 · 2020
RSN: randomized subspace Newton
R Gower, D Kovalev, F Lieder, P Richtárik
Advances in Neural Information Processing Systems 32, 2019
91 · 2019
Linearly converging error compensated SGD
E Gorbunov, D Kovalev, D Makarenko, P Richtárik
Advances in Neural Information Processing Systems 33, 20889-20900, 2020
82 · 2020
Revisiting stochastic extragradient
K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky
International Conference on Artificial Intelligence and Statistics, 4573-4582, 2020
80 · 2020
Optimal and practical algorithms for smooth and strongly convex decentralized optimization
D Kovalev, A Salim, P Richtárik
Advances in Neural Information Processing Systems 33, 18342-18352, 2020
79 · 2020
A linearly convergent algorithm for decentralized optimization: Sending less bits for free!
D Kovalev, A Koloskova, M Jaggi, P Richtarik, S Stich
International Conference on Artificial Intelligence and Statistics, 4087-4095, 2021
74 · 2021
Stochastic Newton and cubic Newton methods with simple local linear-quadratic rates
D Kovalev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1912.01597, 2019
48 · 2019
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, ...
arXiv preprint arXiv:2102.07758, 2021
42 · 2021
Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks
D Kovalev, E Gasanov, A Gasnikov, P Richtarik
Advances in Neural Information Processing Systems 34, 22325-22335, 2021
40 · 2021
Accelerated methods for saddle-point problem
MS Alkousa, AV Gasnikov, DM Dvinskikh, DA Kovalev, FS Stonyakin
Computational Mathematics and Mathematical Physics 60, 1787-1809, 2020
38 · 2020
Accelerated primal-dual gradient method for smooth and convex-concave saddle-point problems with bilinear coupling
D Kovalev, A Gasnikov, P Richtárik
Advances in Neural Information Processing Systems 35, 21725-21737, 2022
37 · 2022
ADOM: accelerated decentralized optimization method for time-varying networks
D Kovalev, E Shulgin, P Richtárik, AV Rogozin, A Gasnikov
International Conference on Machine Learning, 5784-5793, 2021
33 · 2021
Optimal algorithms for decentralized stochastic variational inequalities
D Kovalev, A Beznosikov, A Sadiev, M Persiianov, P Richtárik, ...
Advances in Neural Information Processing Systems 35, 31073-31088, 2022
31 · 2022
The first optimal acceleration of high-order methods in smooth convex optimization
D Kovalev, A Gasnikov
Advances in Neural Information Processing Systems 35, 35339-35351, 2022
30 · 2022
On accelerated methods for saddle-point problems with composite structure
V Tominin, Y Tominin, E Borodich, D Kovalev, A Gasnikov, ...
arXiv preprint arXiv:2103.09344, 2021
29 · 2021
Accelerated methods for composite non-bilinear saddle point problem
M Alkousa, D Dvinskikh, F Stonyakin, A Gasnikov, D Kovalev
arXiv preprint arXiv:1906.03620, 2019
29 · 2019
Towards accelerated rates for distributed optimization over time-varying networks
A Rogozin, V Lukoshkin, A Gasnikov, D Kovalev, E Shulgin
Optimization and Applications: 12th International Conference, OPTIMA 2021 …, 2021
28 · 2021
Articles 1–20