A review of multilayer extreme learning machine neural networks
The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward learning
algorithm that has been successfully applied to regression and classification problems in …
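For orientation, the core ELM recipe (fix random hidden-layer weights, then solve for the output weights by least squares) can be sketched in a few lines. This is an illustrative reconstruction, not code from the reviewed paper; the tanh activation and the helper names `elm_train`/`elm_predict` are choices made here.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Minimal ELM sketch: hidden weights are random and fixed;
    only the output weights are learned, by a least-squares fit."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is fit, training reduces to one linear solve, which is the source of ELM's speed relative to backpropagation.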
A unified algorithmic framework for block-structured optimization involving big data: With applications in machine learning and signal processing
This article presents a powerful algorithmic framework for big data optimization, called
block successive upper-bound minimization (BSUM). BSUM includes as special cases …
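As a rough illustration of the BSUM idea, the sketch below updates one block per iteration by minimizing a quadratic upper bound (majorizer) of the objective over that block. Quadratic surrogates are only one admissible choice in the framework; `f_grad`, the block Lipschitz constants `L_blocks`, and the cyclic block rule are assumptions of this sketch.

```python
def bsum(f_grad, x0, L_blocks, blocks, n_iters=100):
    """BSUM sketch with quadratic majorizers over NumPy arrays.

    At each step one block x_i is updated by minimizing the upper bound
        u_i(x_i; x^k) = f(x^k) + g_i^T (x_i - x_i^k) + (L_i/2) ||x_i - x_i^k||^2,
    whose exact minimizer is a block gradient step with step size 1/L_i.
    """
    x = x0.copy()
    for k in range(n_iters):
        i = k % len(blocks)        # cyclic block selection
        idx = blocks[i]
        g = f_grad(x)[idx]         # gradient restricted to block i
        x[idx] -= g / L_blocks[i]  # minimizer of the quadratic upper bound
    return x
```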
Parallel Multi-Block ADMM with o(1/k) Convergence
This paper introduces a parallel and distributed algorithm for solving the following
minimization problem with linear constraints: minimize f_1(x_1) + ⋯ + f_N(x_N) subject to …
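A minimal Jacobi-style (parallel) multi-block ADMM for this problem class might look as follows: all block updates at an iteration read the same previous iterate, so they can run in parallel. The user-supplied block solvers `solve_blocks[i]` and the damped dual step `gamma` are assumptions of this sketch, not the paper's exact scheme or constants.

```python
import numpy as np

def parallel_admm(solve_blocks, A_list, c, x0_list, gamma=0.5, n_iters=200):
    """Jacobi-type multi-block ADMM sketch for
        minimize f_1(x_1)+...+f_N(x_N)  s.t.  A_1 x_1 + ... + A_N x_N = c.

    solve_blocks[i](v) must return argmin_x f_i(x) + (rho/2)||A_i @ x - v||^2
    (the augmented-Lagrangian block subproblem, with rho absorbed into the solver).
    """
    x = [x0.copy() for x0 in x0_list]
    u = np.zeros_like(c)                                  # scaled dual variable
    for _ in range(n_iters):
        r = sum(A @ xi for A, xi in zip(A_list, x)) - c   # primal residual
        # every block uses the same old iterates -> updates are parallelizable
        x = [solve(A @ xi - r - u)
             for solve, A, xi in zip(solve_blocks, A_list, x)]
        r = sum(A @ xi for A, xi in zip(A_list, x)) - c
        u = u + gamma * r                                 # damped dual ascent step
    return x
```

The damping on the dual step is one common fix for the fact that a naive Jacobi extension of two-block ADMM need not converge.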
Parallel coordinate descent methods for big data optimization
P Richtárik, M Takáč - Mathematical Programming, 2016 - Springer
In this work we show that randomized (block) coordinate descent methods can be
accelerated by parallelization when applied to the problem of minimizing the sum of a …
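To make the parallelization concrete, here is a sketch of randomized coordinate descent that updates a random subset of coordinates simultaneously, with step sizes shrunk by a factor `beta` to compensate for the coupling between concurrent updates. The default `beta = tau` is a conservative placeholder rather than the paper's tight constant, and `grad` and the coordinate Lipschitz constants `L` are user-supplied.

```python
import numpy as np

def parallel_rcd(grad, L, x0, tau=4, n_iters=1000, beta=None, seed=0):
    """Parallel randomized coordinate descent sketch: each iteration samples
    tau coordinates and steps each one independently, with steps damped by
    beta to keep the simultaneous updates from overshooting."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = x.size
    beta = tau if beta is None else beta
    for _ in range(n_iters):
        S = rng.choice(n, size=tau, replace=False)  # random coordinate subset
        g = grad(x)                                 # per-coordinate gradients
        x[S] -= g[S] / (beta * L[S])                # independent coordinate steps
    return x
```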
An asynchronous parallel stochastic coordinate descent algorithm
We describe an asynchronous parallel stochastic coordinate descent algorithm for
minimizing smooth unconstrained or separably constrained functions. The method achieves …
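In the asynchronous setting, workers update shared coordinates without locks, so reads may be stale. The thread-based sketch below only mimics that behavior (Python threads serialize on the GIL, so it illustrates the update rule rather than real speedup); the step size 1/(2 L_i) is a cautious placeholder, not the paper's delay-dependent choice, and `grad_coord(x, i)` is an assumed helper returning the i-th partial derivative.

```python
import threading
import numpy as np

def async_scd(grad_coord, L, x, n_threads=4, iters_per_thread=5000):
    """Asynchronous stochastic coordinate descent sketch: each worker picks a
    random coordinate, reads the (possibly stale) shared iterate, and writes
    back a coordinate gradient step without any locking."""
    def worker(seed):
        rng = np.random.default_rng(seed)
        n = x.size
        for _ in range(iters_per_thread):
            i = rng.integers(n)
            x[i] -= grad_coord(x, i) / (2.0 * L[i])  # lock-free: reads may be stale
    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```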
ARock: an algorithmic framework for asynchronous parallel coordinate updates
Finding a fixed point of a nonexpansive operator, i.e., x^* = Tx^*, abstracts many problems in
numerical linear algebra, optimization, and other areas of data science. To solve fixed-point …
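The coordinate update at the heart of this style of method is a relaxed Krasnosel'skii-Mann step on a single random coordinate. The serial sketch below shows only that rule; the actual framework runs it asynchronously across many agents with stale reads. The operator `T` and the relaxation parameter `eta` are supplied by the user.

```python
import numpy as np

def arock_style(T, x0, eta=0.5, n_iters=10000, seed=0):
    """Serial sketch of a coordinate update for finding x* = T(x*):
    relax one random coordinate toward its fixed-point value,
        x_i <- x_i - eta * (x - T(x))_i."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        i = rng.integers(x.size)
        x[i] -= eta * (x[i] - T(x)[i])  # coordinate Krasnosel'skii-Mann step
    return x
```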
Successive convex approximation: Analysis and applications
M Razaviyayn - 2014 - search.proquest.com
The block coordinate descent (BCD) method is widely used for minimizing a continuous
function f of several block variables. At each iteration of this method, a single block of …
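For reference, the plain BCD loop that this line of work generalizes: cycle through the blocks and minimize exactly over one block at a time, holding the others fixed. The exact block minimizer `argmin_block` is a user-supplied assumption, and the cyclic rule is one of several selection rules (randomized and greedy variants exist).

```python
def block_coordinate_descent(argmin_block, x0, blocks, n_sweeps=50):
    """Plain BCD sketch over a NumPy array x, partitioned into index blocks.

    argmin_block(x, idx) must return the values minimizing f over the
    coordinates in idx with the rest of x held fixed."""
    x = x0.copy()
    for _ in range(n_sweeps):
        for idx in blocks:                  # cyclic block rule
            x[idx] = argmin_block(x, idx)   # exact minimization over one block
    return x
```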
Asynchronous stochastic coordinate descent: Parallelism and convergence properties
We describe an asynchronous parallel stochastic proximal coordinate descent algorithm for
minimizing a composite objective function, which consists of a smooth convex function …
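For the composite (smooth plus separable nonsmooth) setting, the basic per-coordinate update is a forward gradient step followed by a proximal step. The sketch below instantiates the nonsmooth part as lam*||x||_1 purely to make the prox explicit as soft-thresholding; that choice, and the names `grad_i`/`L_i`, are illustrative rather than the paper's.

```python
import numpy as np

def prox_coord_step(x, i, grad_i, L_i, lam):
    """One proximal coordinate step for f(x) + lam*||x||_1:
        x_i <- prox_{(lam/L_i)|.|}( x_i - grad_i / L_i )."""
    z = x[i] - grad_i / L_i                   # forward (gradient) step
    t = lam / L_i
    x[i] = np.sign(z) * max(abs(z) - t, 0.0)  # backward (prox) step: soft-threshold
    return x
```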
Parallel selective algorithms for nonconvex big data optimization
F Facchinei, G Scutari… - IEEE Transactions on …, 2015 - ieeexplore.ieee.org
We propose a decomposition framework for the parallel optimization of the sum of a
differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex …
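One iteration in the spirit of such decomposition frameworks: every block solves a strongly convex subproblem built at the same point, and the iterate is then relaxed toward the collected solutions. The proximal-linearized surrogate used below is just one admissible approximation in this family; `prox_list`, the smoothness constant `L`, and the relaxation `step` are assumptions of the sketch.

```python
def parallel_selective_step(x, grad, L, prox_list, blocks, step=0.9):
    """One parallel decomposition step for min f(x) + sum_i g_i(x_i),
    f smooth (possibly nonconvex), g_i convex and block-separable.

    prox_list[i](v, t) must return prox_{t * g_i}(v)."""
    g = grad(x)
    x_hat = x.copy()
    for idx, prox in zip(blocks, prox_list):      # independent, parallelizable
        x_hat[idx] = prox(x[idx] - g[idx] / L, 1.0 / L)
    return x + step * (x_hat - x)                 # relax toward the block solutions
```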
Block stochastic gradient iteration for convex and nonconvex optimization
The stochastic gradient (SG) method can quickly solve a problem with a large number of
components in the objective, or a stochastic optimization problem, to a moderate accuracy …
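A minimal block stochastic gradient sketch, blending the two ingredients the title names: each update samples a small batch of objective components and applies a gradient step to a single randomly chosen block of variables. The constant learning rate is a simplification (analyses of such methods typically use diminishing step sizes), and `grad_block_on_batch`, `blocks`, and the sampling scheme are assumptions here.

```python
import numpy as np

def block_sg(grad_block_on_batch, n_data, x0, blocks,
             lr=0.01, n_iters=1000, batch=32, seed=0):
    """Block stochastic gradient sketch.

    grad_block_on_batch(x, idx, S) must return the gradient of the averaged
    objective over the sampled components S, restricted to block idx."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        S = rng.choice(n_data, size=batch, replace=False)  # sampled components
        idx = blocks[rng.integers(len(blocks))]            # sampled block
        x[idx] -= lr * grad_block_on_batch(x, idx, S)      # stochastic block step
    return x
```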