Fast convergence to non-isolated minima: four equivalent conditions for $C^2$ functions
Optimization algorithms can see their local convergence rates deteriorate when the Hessian
at the optimum is singular. These singularities are inescapable when the optima are non …
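A one-line illustration of the claim in this snippet (my example, not from the paper): if the minimizers of a smooth function form a continuum, the Hessian must be singular along it. For instance,

\[
f(x, y) = x^2, \qquad \operatorname{argmin} f = \{(0, y) : y \in \mathbb{R}\}, \qquad
\nabla^2 f(0, y) = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix},
\]

so the Hessian at every minimizer has a zero eigenvalue in the direction along which the minimizers extend, which is exactly the degeneracy that slows standard local convergence analyses.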
Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits
Variational quantum algorithms rely on the optimization of parameterized quantum circuits in
noisy settings. The commonly used back-propagation procedure in classical machine …
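To make the contrast with full-gradient methods concrete, here is a minimal Python sketch of random coordinate descent with a parameter-shift derivative estimate; the cost function below is a hypothetical noisy stand-in for a circuit expectation value, and the paper's actual procedure may differ in how it selects coordinates and step sizes.

import numpy as np

def cost(theta):
    # Hypothetical stand-in for a noisy expectation value <H>(theta);
    # a real experiment would estimate this on hardware or a simulator.
    return np.sum(np.cos(theta)) + 0.01 * np.random.randn()

def random_coordinate_descent(theta, lr=0.1, steps=500, shift=np.pi / 2):
    theta = theta.copy()
    for _ in range(steps):
        j = np.random.randint(theta.size)   # draw one coordinate at random
        e = np.zeros_like(theta)
        e[j] = shift
        # Parameter-shift estimate of the partial derivative w.r.t. theta_j
        # (exact for single-frequency costs such as cos(theta_j)):
        grad_j = 0.5 * (cost(theta + e) - cost(theta - e))
        theta[j] -= lr * grad_j             # update only the chosen coordinate
    return theta

theta0 = np.random.uniform(0.0, 2.0 * np.pi, size=6)
print(cost(random_coordinate_descent(theta0)))  # approaches -6 as each theta_j -> pi

Each step needs only two cost evaluations, which is one reason coordinate-wise updates can be attractive when every evaluation is a noisy quantum measurement.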
Convergence of a normal map-based prox-SGD method under the KL inequality
A Milzarek, J Qiu - arXiv preprint arXiv:2305.05828, 2023 - arxiv.org
In this paper, we present a novel stochastic normal map-based algorithm ($\mathsf{norM}\text{-}\mathsf{SGD}$) for nonconvex composite-type optimization problems and …
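The snippet does not spell out the update, so the following Python sketch is only a guess at the general template: a stochastic step on Robinson's normal map $F(z) = \nabla f(\mathrm{prox}_{\gamma\varphi}(z)) + (z - \mathrm{prox}_{\gamma\varphi}(z))/\gamma$ for a composite problem $f + \varphi$, instantiated here with a least-squares $f$ and $\varphi = \lambda\|\cdot\|_1$; the authors' actual $\mathsf{norM}\text{-}\mathsf{SGD}$ method may differ in its step-size rules and assumptions.

import numpy as np

def soft_threshold(z, t):
    # prox_{t * ||.||_1}(z), the proximal map of the l1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def norm_map_sgd(A, b, lam=0.1, gamma=0.5, alpha=0.05, steps=2000, batch=8):
    m, n = A.shape
    z = np.zeros(n)
    for _ in range(steps):
        x = soft_threshold(z, gamma * lam)       # x = prox_{gamma*phi}(z)
        idx = np.random.randint(m, size=batch)   # minibatch for f(x) = mean of 0.5*(a_i x - b_i)^2
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch
        # Stochastic normal-map step: z <- z - alpha * (grad f(x) + (z - x)/gamma)
        z -= alpha * (g + (z - x) / gamma)
    return soft_threshold(z, gamma * lam)        # report the prox point

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20) * (rng.random(20) < 0.3)
b = A @ x_true
x_hat = norm_map_sgd(A, b)

Note that the iterate z lives in the "pre-prox" space; stationarity corresponds to F(z) = 0, i.e., x = prox(z) satisfying the composite first-order optimality condition.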
Reinforcement Learning and Variational Quantum Algorithms
J Yao - 2023 - escholarship.org
In recent years, the realms of deep learning and variational quantum algorithms have
undergone significant advancements. These innovative algorithms have proven to be …
Counterexamples for Noise Models of Stochastic Gradients
V Patel - Examples and Counterexamples, 2023 - Elsevier
Abstract Stochastic Gradient Descent (SGD) is a widely used, foundational algorithm in data
science and machine learning. As a result, analyses of SGD abound, making use of a variety …