A Review of multilayer extreme learning machine neural networks
The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward learning
algorithm that has been successfully applied to regression and classification problems in …
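Since the snippet stops mid-sentence, a minimal sketch of the basic single-hidden-layer ELM training rule may be useful: input weights and biases are drawn at random and never trained, and only the output weights are fit in closed form by least squares. The function names and the tanh activation below are illustrative choices, not taken from the review.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    # Hidden layer: random input weights and biases, never trained.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y          # output weights: least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = sin(x) on [0, pi].
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y, n_hidden=50)
print(np.max(np.abs(elm_predict(X, W, b, beta) - y)))  # small residual
```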
Understanding the acceleration phenomenon via high-resolution differential equations
Gradient-based optimization algorithms can be studied from the perspective of limiting
ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not …
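For reference, the standard low-resolution ODE for Nesterov's method (due to Su, Boyd, and Candès) and, schematically, the high-resolution refinement that retains the step size s through a Hessian-driven correction term; the exact coefficients below are my recollection and should be checked against the paper:

```latex
% Low-resolution ODE for Nesterov's method (smooth convex f):
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f(X(t)) = 0.
% High-resolution refinement, keeping O(\sqrt{s}) terms:
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t)
  + \sqrt{s}\,\nabla^2 f(X(t))\,\dot{X}(t)
  + \Bigl(1 + \tfrac{3\sqrt{s}}{2t}\Bigr)\nabla f(X(t)) = 0.
```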
Fast optimization via inertial dynamics with closed-loop damping
In a real Hilbert space H, in order to develop fast optimization methods, we analyze the
asymptotic behavior, as time t tends to infinity, of a large class of autonomous dissipative …
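Schematically, the distinction at stake is between open-loop damping, where the viscosity coefficient is a prescribed function of time (making the system non-autonomous), and closed-loop damping, where it is a feedback of the current state and velocity, which keeps the system autonomous as in the class studied here. The general form below is my paraphrase, not the paper's exact class:

```latex
% Open-loop damping (prescribed in time, non-autonomous):
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0.
% Closed-loop damping (state feedback, autonomous):
\ddot{x}(t) + \gamma\bigl(x(t),\dot{x}(t)\bigr)\,\dot{x}(t) + \nabla f(x(t)) = 0.
```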
First-order optimization algorithms via inertial systems with Hessian driven damping
In a Hilbert space setting, for convex optimization, we analyze the convergence rate of a
class of first-order algorithms involving inertial features. They can be interpreted as discrete …
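A representative member of this class of inertial systems (my reconstruction; the paper's setting is more general) couples asymptotically vanishing viscous damping with Hessian-driven damping:

```latex
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
  + \beta\,\nabla^2 f(x(t))\,\dot{x}(t) + \nabla f(x(t)) = 0.
```

The reason such methods can remain first-order is that ∇²f(x(t))ẋ(t) = (d/dt)∇f(x(t)), so in a time discretization the Hessian term becomes a difference of consecutive gradients and no second-order information is ever computed.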
Almost sure convergence rates for stochastic gradient descent and stochastic heavy ball
We study stochastic gradient descent (SGD) and the stochastic heavy ball method (SHB,
otherwise known as the momentum method) for the general stochastic approximation …
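The two update rules under study are standard; here is a minimal sketch (the step-size schedules, noise model, and assumptions under which the paper proves almost sure rates are not reproduced):

```python
import numpy as np

def sgd_step(x, g, lr):
    # SGD: x_{k+1} = x_k - a_k g_k, with g_k a stochastic gradient.
    return x - lr * g

def shb_step(x, x_prev, g, lr, beta):
    # Stochastic heavy ball (momentum):
    # x_{k+1} = x_k - a_k g_k + beta (x_k - x_{k-1}).
    return x - lr * g + beta * (x - x_prev)

# Toy run on f(x) = 0.5 ||x||^2 with additive gradient noise.
rng = np.random.default_rng(0)
x = x_prev = np.ones(2)
for k in range(1, 1001):
    g = x + 0.1 * rng.normal(size=2)      # noisy gradient
    x, x_prev = shb_step(x, x_prev, g, lr=1.0 / k, beta=0.5), x
print(x)  # drifts toward the minimizer 0
```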
A Lyapunov analysis of accelerated methods in optimization
Accelerated optimization methods, such as Nesterov's accelerated gradient method, play a
significant role in optimization. Several accelerated methods are provably optimal under …
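As one standard example of the kind of certificate such analyses use (stated for the continuous-time Nesterov ODE with damping 3/t; the paper's constructions are broader), the function below is nonincreasing along trajectories and immediately yields an O(1/t²) rate on function values:

```latex
\mathcal{E}(t) = t^{2}\bigl(f(X(t)) - f^{*}\bigr)
  + 2\,\bigl\| X(t) + \tfrac{t}{2}\dot{X}(t) - x^{*} \bigr\|^{2},
\qquad
\dot{\mathcal{E}}(t) \le 0
\;\Longrightarrow\;
f(X(t)) - f^{*} \le \frac{\mathcal{E}(t_{0})}{t^{2}}.
```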
Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
In a Hilbert space setting ℋ, given a convex continuously differentiable function Φ: ℋ → ℝ
and a positive parameter α, we consider the inertial dynamic system with Asymptotic …
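For context, the dynamic in question and the subcritical rate it is known to satisfy (my recollection of the result; note that the critical case α = 3 recovers the classical O(t⁻²) regime):

```latex
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla \Phi(x(t)) = 0,
\qquad
\Phi(x(t)) - \min_{\mathcal{H}} \Phi
  = \mathcal{O}\!\bigl(t^{-2\alpha/3}\bigr)
\quad \text{for } \alpha \le 3 .
```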
Direct Runge-Kutta discretization achieves acceleration
We study gradient-based optimization methods obtained by directly discretizing a second-
order ordinary differential equation (ODE) related to the continuous limit of Nesterov's …
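A numerical illustration of the idea, not the paper's scheme or assumptions: integrating the continuous limit of Nesterov's method with an off-the-shelf explicit Runge-Kutta solver on a toy quadratic (starting at t = 1 to avoid the singular damping coefficient at t = 0):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 10.0])                 # f(x) = 0.5 x^T A x, grad f(x) = A x
grad = lambda x: A @ x

def nesterov_ode(t, z):
    # z = (x, v) stacked; the ODE is x'' + (3/t) x' + grad f(x) = 0.
    x, v = z[:2], z[2:]
    return np.concatenate([v, -(3.0 / t) * v - grad(x)])

z0 = np.concatenate([np.ones(2), np.zeros(2)])       # start at rest
sol = solve_ivp(nesterov_ode, (1.0, 50.0), z0, method="RK45", rtol=1e-8)
print(sol.y[:2, -1])                     # position near the minimizer 0
```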
Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone linesearch algorithms
We propose ZeroFPR, a nonmonotone linesearch algorithm for minimizing the sum of two
nonconvex functions, one of which is smooth and the other possibly nonsmooth. ZeroFPR …
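ZeroFPR itself applies quasi-Newton directions to the forward-backward envelope, a real-valued merit function for this composite problem; as background, here is a minimal sketch of the underlying forward-backward step, with the ℓ1 norm standing in for the nonsmooth term (an illustrative choice, not the paper's algorithm):

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau * ||.||_1 (the nonsmooth term in this example).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward_step(x, grad_f, gamma, lam):
    # Forward (gradient) step on the smooth term, then backward (proximal)
    # step on the nonsmooth term: x+ = prox_{gamma*g}(x - gamma*grad_f(x)).
    return soft_threshold(x - gamma * grad_f(x), gamma * lam)
```

The name ZeroFPR refers, as I understand it, to driving the fixed-point residual x − x⁺ of this step to zero.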
Strong convergence of inertial forward–backward methods for solving monotone inclusions
B. Tan and S. Y. Cho, Applicable Analysis, 2022 (Taylor & Francis)
The paper presents four modifications of the inertial forward–backward splitting method for
monotone inclusion problems in the framework of real Hilbert spaces. The advantages of our …
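For orientation, the basic inertial forward-backward template that such modifications build on, sketched under the simplifying assumption that the single-valued operator is a gradient and the set-valued one is a subdifferential, so the resolvent reduces to a proximal map; the paper's four specific modifications and their strong-convergence safeguards are not reproduced here:

```python
import numpy as np

def inertial_fb(x0, grad_f, prox_g, gamma, theta, n_iter=200):
    # Template: y_k = x_k + theta (x_k - x_{k-1})   (inertial extrapolation)
    #           x_{k+1} = prox_{gamma*g}(y_k - gamma*grad_f(y_k))
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)
        x_prev, x = x, prox_g(y - gamma * grad_f(y), gamma)
    return x

# Toy usage: 0.5*||x - b||^2 + lam*||x||_1 (soft-thresholding of b).
b, lam = np.array([3.0, -0.2, 1.5]), 1.0
grad_f = lambda x: x - b
prox_g = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g * lam, 0.0)
print(inertial_fb(np.zeros(3), grad_f, prox_g, gamma=0.9, theta=0.3))
```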