Convolutional proximal neural networks and plug-and-play algorithms
In this paper, we introduce convolutional proximal neural networks (cPNNs), which are by
construction averaged operators. For filters with full length, we propose a stochastic gradient …
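Since this entry pairs averaged-operator networks with plug-and-play schemes, a minimal sketch of a generic plug-and-play forward-backward iteration may help fix ideas. The quadratic data term, the step size rule, and the toy moving-average denoiser below are illustrative assumptions only; the cPNN construction from the paper is not implemented.

```python
# Hedged sketch: plug-and-play forward-backward iteration
#   x_{k+1} = D( x_k - tau * grad f(x_k) )
# with f(x) = 0.5 * ||A x - b||^2 and a placeholder averaged denoiser D
# (simple local averaging); the paper's cPNN denoiser is NOT implemented here.
import numpy as np

def pnp_forward_backward(A, b, denoiser, tau, n_iter=200):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the data-fidelity term
        x = denoiser(x - tau * grad)      # denoiser replaces the proximal map
    return x

def moving_average_denoiser(x, width=3):
    # toy stand-in for a learned averaged-operator denoiser
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = A @ np.ones(100) + 0.1 * rng.standard_normal(50)
tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1/L for stability
x_hat = pnp_forward_backward(A, b, moving_average_denoiser, tau)
```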
A stochastic proximal alternating minimization for nonsmooth and nonconvex optimization
In this work, we introduce a novel stochastic proximal alternating linearized minimization
algorithm [J. Bolte, S. Sabach, and M. Teboulle, Math. Program., 146 (2014), pp. 459--494] …
PCA reduced Gaussian mixture models with applications in superresolution
Despite the rapid development of computational hardware, the treatment of large and high
dimensional data sets is still a challenging problem. This paper provides a twofold …
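As a rough point of reference for this entry, the following sketch shows only the generic two-stage pipeline of PCA dimension reduction followed by Gaussian mixture fitting; the paper's PCA-reduced GMM couples the low-dimensional subspaces to the mixture components, which this standard pipeline does not do.

```python
# Hedged sketch: PCA dimension reduction + GMM clustering (generic pipeline,
# not the paper's coupled PCA-reduced mixture model).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 256))                 # toy high-dimensional data

Z = PCA(n_components=20).fit_transform(X)              # project to 20 dimensions
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(Z)
labels = gmm.predict(Z)                                # cluster labels in PCA space
```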
A variational EM acceleration for efficient clustering at very large scales
F Hirschberger, D Forster… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
How can we efficiently find very large numbers of clusters C in very large datasets N of
potentially high dimensionality D? Here we address the question by using a novel …
Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
The stochastic gradient descent (SGD) method has been applied to the tensor
CANDECOMP/PARAFAC (CP) decomposition problem to reduce the computational cost …
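For orientation, a minimal sketch of plain entrywise SGD for rank-R CP decomposition of a 3-way tensor follows; the factor names, step size, and uniform entry sampling are assumptions for illustration, and the paper's inertial acceleration is not shown.

```python
# Hedged sketch: entrywise SGD for rank-R CP decomposition of a 3-way tensor,
#   T[i, j, k] ≈ sum_r A[i, r] * B[j, r] * C[k, r].
import numpy as np

def cp_sgd(T, R, n_iter=100_000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = 0.1 * rng.standard_normal((I, R))
    B = 0.1 * rng.standard_normal((J, R))
    C = 0.1 * rng.standard_normal((K, R))
    for _ in range(n_iter):
        i, j, k = rng.integers(I), rng.integers(J), rng.integers(K)  # random entry
        pred = np.sum(A[i] * B[j] * C[k])        # model value at (i, j, k)
        err = pred - T[i, j, k]                  # residual of the squared loss
        ga, gb, gc = err * B[j] * C[k], err * A[i] * C[k], err * A[i] * B[j]
        A[i] -= lr * ga                          # update only the sampled rows
        B[j] -= lr * gb
        C[k] -= lr * gc
    return A, B, C
```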
A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems
C Guo, J Zhao, QL Dong - Numerical Algorithms, 2024 - Springer
In this paper, for solving a broad class of large-scale nonconvex and nonsmooth
optimization problems, we propose a stochastic two-step inertial Bregman proximal …
SPRING: A fast stochastic proximal alternating method for non-smooth non-convex optimization
We introduce SPRING, a novel stochastic proximal alternating linearized minimization
algorithm for solving a class of non-smooth and non-convex optimization problems. Large …
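To make the "stochastic proximal alternating linearized minimization" structure concrete, here is a hedged sketch of a generic two-block step for min f(x, y) + lam(||x||_1 + ||y||_1) with a plain mini-batch gradient estimator; SPRING's variance-reduced estimators and step-size analysis are not reproduced.

```python
# Hedged sketch: one stochastic PALM-style step (generic, not SPRING itself).
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1, used here as the example nonsmooth term
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_palm_step(x, y, grad_f_x, grad_f_y, tau_x, tau_y, lam):
    # x-block: linearize the smooth coupling term with a stochastic gradient,
    # then take a proximal step on the nonsmooth term in x
    x = soft_threshold(x - tau_x * grad_f_x(x, y), tau_x * lam)
    # y-block: same update, using the freshly updated x
    y = soft_threshold(y - tau_y * grad_f_y(x, y), tau_y * lam)
    return x, y

# toy usage: sparse rank-one factorization
#   min_{x,y} 0.5 * ||M - x y^T||_F^2 + lam * (||x||_1 + ||y||_1)
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 20))
x, y = rng.standard_normal(30), rng.standard_normal(20)

def grad_x(x, y):
    idx = rng.choice(30, size=8, replace=False)      # mini-batch of rows
    g = np.zeros_like(x)
    g[idx] = (np.outer(x[idx], y) - M[idx]) @ y      # gradient on sampled rows
    return g * (30 / 8)                              # rescale to keep it unbiased

def grad_y(x, y):
    return (np.outer(x, y) - M).T @ x                # full gradient in y, for brevity

for _ in range(200):
    x, y = stochastic_palm_step(x, y, grad_x, grad_y, 1e-3, 1e-3, lam=0.1)
```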
Proximal residual flows for Bayesian inverse problems
J Hertrich - International Conference on Scale Space and …, 2023 - Springer
Normalizing flows are a powerful tool for generative modelling, density estimation and
posterior reconstruction in Bayesian inverse problems. In this paper, we introduce proximal …
Sampling-based methods for multi-block optimization problems over transport polytopes
This paper focuses on multi-block optimization problems over transport polytopes, which
underlie various applications including strongly correlated quantum physics and machine …
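Since the blocks in this entry are constrained to transport polytopes, a short reminder of how such a constraint is typically handled, namely classical Sinkhorn scaling onto prescribed marginals, may be useful; the paper's sampling-based block updates are not reproduced here.

```python
# Hedged sketch: Sinkhorn scaling of a positive matrix K onto the transport
# polytope U(r, c) = { P >= 0 : P 1 = r, P^T 1 = c }.  This only illustrates
# the feasible set named in the entry, not the paper's method.
import numpy as np

def sinkhorn_projection(K, r, c, n_iter=500):
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)            # enforce the row marginals
        v = c / (K.T @ u)          # enforce the column marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
K = rng.random((4, 6)) + 1e-3      # strictly positive kernel matrix
r = np.full(4, 1 / 4)              # row marginals (uniform)
c = np.full(6, 1 / 6)              # column marginals (uniform)
P = sinkhorn_projection(K, r, c)
print(P.sum(axis=1), P.sum(axis=0))  # approximately r and c
```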
Inertial Accelerated Stochastic Mirror Descent for Large-Scale Generalized Tensor CP Decomposition
The majority of classic tensor CP decomposition models are designed for squared loss,
employing Euclidean distance as a local proximal term. However, the Euclidean distance is …
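To make the contrast with a Euclidean proximal term concrete, the following is a hedged sketch of a single mirror-descent update with a KL (negative-entropy) Bregman term on the probability simplex, i.e. the exponentiated-gradient update; the step size and constraint set are illustrative choices, not the paper's algorithm.

```python
# Hedged sketch: mirror descent with the negative-entropy mirror map,
#   x_{k+1} = argmin_x  <g_k, x> + (1/lr) * KL(x || x_k)   over the simplex,
# giving a multiplicative update instead of a Euclidean projected-gradient step.
import numpy as np

def exponentiated_gradient_step(x, grad, lr):
    x_new = x * np.exp(-lr * grad)     # KL/Bregman proximal step
    return x_new / x_new.sum()         # renormalize onto the probability simplex

x = np.full(5, 0.2)                    # start at the uniform distribution
grad = np.array([1.0, 0.5, 0.0, -0.5, -1.0])
x = exponentiated_gradient_step(x, grad, lr=0.1)
```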