Towards a theory of non-commutative optimization: Geodesic 1st and 2nd order methods for moment maps and polytopes

P Bürgisser, C Franks, A Garg… - 2019 IEEE 60th …, 2019 - ieeexplore.ieee.org
This paper initiates a systematic development of a theory of non-commutative optimization, a
setting which greatly extends ordinary (Euclidean) convex optimization. It aims to unify and …

[BOOK] Metric algebraic geometry

P Breiding, K Kohn, B Sturmfels - 2024 - library.oapen.org
Metric algebraic geometry combines concepts from algebraic geometry and differential
geometry. Building on classical foundations, it offers practical tools for the 21st century …

Maximum likelihood estimation for matrix normal models via quiver representations

H Derksen, V Makam - SIAM Journal on Applied Algebra and Geometry, 2021 - SIAM
We study the log-likelihood function and maximum likelihood estimate (MLE) for the matrix
normal model for both real and complex models. We describe the exact number of samples …

The minimal canonical form of a tensor network

A Acuaviva, V Makam, H Nieuwboer… - 2023 IEEE 64th …, 2023 - ieeexplore.ieee.org
Tensor networks have a gauge degree of freedom on the virtual degrees of freedom that are
contracted. A canonical form is a choice of fixing this degree of freedom. For matrix product …
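To make the gauge freedom mentioned above concrete, here is a minimal NumPy sketch (an illustration, not the paper's construction): on a contracted bond of a two-site matrix product state, inserting an invertible matrix G on one side and its inverse on the other leaves the contracted physical tensor unchanged, which is exactly the freedom a canonical form fixes.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 3                        # physical and virtual (bond) dimensions

# Two MPS tensors sharing one contracted virtual index of dimension D.
A = rng.standard_normal((d, D))    # (physical, bond)
B = rng.standard_normal((D, d))    # (bond, physical)
state = A @ B                      # physical tensor after contracting the bond

# Gauge transformation: insert G and G^{-1} on the contracted bond.
G = rng.standard_normal((D, D)) + 3 * np.eye(D)   # generically invertible
A_gauged = A @ G
B_gauged = np.linalg.inv(G) @ B

# The contracted state is unchanged, so G is pure gauge.
assert np.allclose(A_gauged @ B_gauged, state)
```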

Negative curvature obstructs acceleration for strongly geodesically convex optimization, even with exact first-order oracles

C Criscitiello, N Boumal - Conference on Learning Theory, 2022 - proceedings.mlr.press
Hamilton and Moitra (2021) showed that, in certain regimes, it is not possible to accelerate
Riemannian gradient descent in the hyperbolic plane if we restrict ourselves to algorithms …

Maximum likelihood estimation for tensor normal models via castling transforms

H Derksen, V Makam, M Walter - Forum of Mathematics, Sigma, 2022 - cambridge.org
In this paper, we study sample size thresholds for maximum likelihood estimation for tensor
normal models. Given the model parameters and the number of samples, we determine …

No-go theorem for acceleration in the hyperbolic plane

L Hamilton, A Moitra - arXiv preprint arXiv:2101.05657, 2021 - arxiv.org
In recent years there has been significant effort to adapt the key tools and ideas in convex
optimization to the Riemannian setting. One key challenge has remained: Is there a …

Interior-point methods on manifolds: theory and applications

H Hirai, H Nieuwboer, M Walter - 2023 IEEE 64th Annual …, 2023 - ieeexplore.ieee.org
Interior-point methods offer a highly versatile framework for convex optimization that is
effective in theory and practice. A key notion in their theory is that of a self-concordant …
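For reference, the standard Euclidean definition behind that notion (stated as general background; the paper itself works with a Riemannian generalization): a three-times differentiable convex function f on an open convex domain is self-concordant if its third derivative is controlled by its second, as below.

```latex
% Standard Euclidean self-concordance condition (background only;
% the cited paper develops a manifold analogue).
\[
  \bigl| D^3 f(x)[h,h,h] \bigr| \;\le\; 2 \bigl( D^2 f(x)[h,h] \bigr)^{3/2}
  \qquad \text{for all } x \in \operatorname{dom} f,\ h \in \mathbb{R}^n .
\]
```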

Near optimal sample complexity for matrix and tensor normal models via geodesic convexity

C Franks, R Oliveira, A Ramachandran… - arXiv preprint arXiv …, 2021 - arxiv.org
The matrix normal model, the family of Gaussian matrix-variate distributions whose
covariance matrix is the Kronecker product of two lower dimensional factors, is frequently …
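As a concrete illustration of that Kronecker-product covariance structure, here is a minimal sketch (illustration only, not the authors' estimator, with hypothetical covariance factors Sigma_row and Sigma_col): if Z has i.i.d. standard normal entries, then X = L_row Z L_col^T is matrix-normal with cov(vec(X)) equal to the Kronecker product of the column and row factors.

```python
import numpy as np

# Minimal sketch: sampling from a matrix normal model, i.e. a Gaussian on
# p x q matrices whose covariance of vec(X) is the Kronecker product of two
# lower-dimensional factors.
rng = np.random.default_rng(0)
p, q, n = 4, 3, 100                          # row dim, column dim, sample size

# Hypothetical positive-definite row/column covariance factors.
A = rng.standard_normal((p, p)); Sigma_row = A @ A.T + p * np.eye(p)
B = rng.standard_normal((q, q)); Sigma_col = B @ B.T + q * np.eye(q)

# If Z has i.i.d. N(0,1) entries, X = L_row @ Z @ L_col.T satisfies
# cov(vec(X)) = Sigma_col (Kronecker) Sigma_row  (column-major vec convention).
L_row = np.linalg.cholesky(Sigma_row)
L_col = np.linalg.cholesky(Sigma_col)
samples = np.stack([L_row @ rng.standard_normal((p, q)) @ L_col.T
                    for _ in range(n)])
print(samples.shape)                         # (n, p, q)
```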