A review of multilayer extreme learning machine neural networks

JA Vásquez-Coronel, M Mora, K Vilches - Artificial Intelligence Review, 2023 - Springer
The Extreme Learning Machine is a single-hidden-layer feedforward learning
algorithm, which has been successfully applied in regression and classification problems in …
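
As a rough illustration of the training scheme this abstract alludes to, a minimal ELM fit (our own sketch, not code from the review) draws the hidden-layer weights at random and solves only for the output weights by least squares:

```python
import numpy as np

def train_elm(X, T, n_hidden=100, seed=0):
    """Basic Extreme Learning Machine: random hidden layer, least-squares output layer."""
    rng = np.random.default_rng(seed)
    # Hidden-layer parameters are sampled once and never updated.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ T    # output weights via the Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Multilayer ELM variants typically stack several such randomly parameterized layers (often trained as ELM autoencoders) before the final least-squares readout.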

Understanding the acceleration phenomenon via high-resolution differential equations

B Shi, SS Du, MI Jordan, WJ Su - Mathematical Programming, 2022 - Springer
Gradient-based optimization algorithms can be studied from the perspective of limiting
ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not …
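
For context, the high-resolution ODE associated with Nesterov's accelerated gradient method for convex, L-smooth f with step size s is usually written (the constants below are as we recall them from this line of work, so treat them as an assumption) as

\[
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \sqrt{s}\,\nabla^2 f(X(t))\,\dot{X}(t)
  + \Bigl(1 + \frac{3\sqrt{s}}{2t}\Bigr)\nabla f(X(t)) = 0,
\]

which keeps a gradient-correction term of order √s that the low-resolution limit (s → 0) discards; this extra term is what allows the ODE framework to distinguish Nesterov's method from Polyak's heavy-ball method.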

Fast optimization via inertial dynamics with closed-loop damping

H Attouch, RI Boţ, ER Csetnek - Journal of the European Mathematical …, 2022 - ems.press
In a real Hilbert space H, in order to develop fast optimization methods, we analyze the
asymptotic behavior, as time t tends to infinity, of a large class of autonomous dissipative …
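
A representative member of this class (our schematic form, not a formula quoted from the paper) is an inertial gradient system whose damping coefficient is a closed-loop function of the state, for instance

\[
\ddot{x}(t) + \gamma\bigl(\|\dot{x}(t)\|\bigr)\,\dot{x}(t) + \nabla f(x(t)) = 0,
\]

so the friction adapts to the current velocity instead of following a prescribed open-loop schedule such as γ(t) = α/t.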

First-order optimization algorithms via inertial systems with Hessian driven damping

H Attouch, Z Chbani, J Fadili, H Riahi - Mathematical Programming, 2022 - Springer
In a Hilbert space setting, for convex optimization, we analyze the convergence rate of a
class of first-order algorithms involving inertial features. They can be interpreted as discrete …
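
The underlying continuous-time model is the inertial system with Hessian-driven damping, which in its standard form (the paper analyzes variants with additional time-dependent coefficients) reads

\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \beta\,\nabla^2 f(x(t))\,\dot{x}(t) + \nabla f(x(t)) = 0 .
\]

The viscous term (α/t)ẋ is responsible for acceleration, while the geometric term β∇²f(x)ẋ attenuates oscillations; in the discrete algorithms the Hessian never needs to be formed, because ∇²f(x(t))ẋ(t) is the time derivative of ∇f(x(t)) and can be replaced by differences of consecutive gradients.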

Almost sure convergence rates for stochastic gradient descent and stochastic heavy ball

O Sebbouh, RM Gower… - Conference on Learning …, 2021 - proceedings.mlr.press
We study stochastic gradient descent (SGD) and the stochastic heavy ball method (SHB,
otherwise known as the momentum method) for the general stochastic approximation …
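
For reference, a minimal stochastic heavy ball update (our own sketch; the paper treats general step-size and momentum sequences) looks as follows:

```python
import numpy as np

def shb(stoch_grad, x0, lr=0.01, momentum=0.9, n_iter=1000, seed=0):
    """Stochastic heavy ball: x_{k+1} = x_k - lr * g_k + momentum * (x_k - x_{k-1}),
    where g_k is a stochastic gradient returned by stoch_grad(x, rng)."""
    rng = np.random.default_rng(seed)
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iter):
        g = stoch_grad(x, rng)                                   # stochastic gradient at x_k
        x, x_prev = x - lr * g + momentum * (x - x_prev), x      # heavy ball step
    return x

# Toy usage: noisy gradient of f(x) = 0.5 * ||x||^2
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_hat = shb(noisy_grad, np.ones(5))
```

Setting the momentum to zero recovers plain SGD, the other method whose almost sure rates are studied in the paper.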

A Lyapunov analysis of accelerated methods in optimization

AC Wilson, B Recht, MI Jordan - Journal of Machine Learning Research, 2021 - jmlr.org
Accelerated optimization methods, such as Nesterov's accelerated gradient method, play a
significant role in optimization. Several accelerated methods are provably optimal under …
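
The flavor of such an analysis, in the continuous-time limit of Nesterov's method, is captured by the classical energy (we restate the standard choice; the paper works with a more general family of Lyapunov functions)

\[
\mathcal{E}(t) = t^2\bigl(f(X(t)) - f^\star\bigr)
  + 2\bigl\|X(t) + \tfrac{t}{2}\dot{X}(t) - x^\star\bigr\|^2 ,
\]

which is nonincreasing along trajectories and therefore certifies f(X(t)) − f* = O(1/t²); discrete analogues of such energies yield the optimal rates for the accelerated methods themselves.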

Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3

H Attouch, Z Chbani, H Riahi - ESAIM: Control, Optimisation and …, 2019 - esaim-cocv.org
In a Hilbert space setting ℋ, given Φ: ℋ → ℝ a convex continuously differentiable function,
and α a positive parameter, we consider the inertial dynamic system with Asymptotic …
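
The dynamic in question is the Asymptotic Vanishing Damping system

\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla\Phi(x(t)) = 0 .
\]

For α > 3 one has the familiar o(1/t²) decay of Φ(x(t)) − min Φ; the subcritical case α ≤ 3 treated here yields, as we recall the result, the slower rate O(t^{−2α/3}).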

Direct Runge-Kutta discretization achieves acceleration

J Zhang, A Mokhtari, S Sra… - Advances in neural …, 2018 - proceedings.neurips.cc
We study gradient-based optimization methods obtained by directly discretizing a second-
order ordinary differential equation (ODE) related to the continuous limit of Nesterov's …
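
A minimal version of that idea (our sketch, using a classical fourth-order Runge-Kutta step rather than the general integrators analyzed in the paper) rewrites the limiting second-order ODE of Nesterov's method as a first-order system in (X, V) and integrates it directly:

```python
import numpy as np

def nag_ode_rk4(grad_f, x0, h=0.05, t0=1.0, n_steps=2000):
    """Integrate X'' + (3/t) X' + grad_f(X) = 0 as the first-order system
    (X, V)' = (V, -(3/t) V - grad_f(X)) with classical RK4 steps."""
    def rhs(t, z):
        x, v = z
        return np.array([v, -(3.0 / t) * v - grad_f(x)])

    z = np.array([np.array(x0, dtype=float), np.zeros_like(x0, dtype=float)])
    t = t0
    for _ in range(n_steps):
        k1 = rhs(t, z)
        k2 = rhs(t + h / 2, z + (h / 2) * k1)
        k3 = rhs(t + h / 2, z + (h / 2) * k2)
        k4 = rhs(t + h, z + h * k3)
        z = z + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z[0]      # final position along the trajectory

# Toy usage on f(x) = 0.5 * ||x||^2
x_end = nag_ode_rk4(lambda x: x, np.ones(3))
```

The paper's message is that, under suitable smoothness assumptions, such direct discretizations can already retain accelerated convergence rates.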

Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone linesearch algorithms

A Themelis, L Stella, P Patrinos - SIAM Journal on Optimization, 2018 - SIAM
We propose ZeroFPR, a nonmonotone linesearch algorithm for minimizing the sum of two
nonconvex functions, one of which is smooth and the other possibly nonsmooth. ZeroFPR …
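
As background, the forward-backward envelope of φ = f + g with stepsize γ is the value function of the proximal-gradient subproblem (standard definition; the nonconvex setting of the paper imposes extra conditions we do not restate):

\[
\varphi_\gamma(x) = \min_{z}\Bigl\{ f(x) + \langle \nabla f(x),\, z - x\rangle + g(z)
  + \tfrac{1}{2\gamma}\|z - x\|^2 \Bigr\} ,
\]

an everywhere real-valued surrogate whose stationary points correspond to those of φ for suitably small γ, which is what makes a smooth-style linesearch such as ZeroFPR's feasible on nonsmooth composite problems.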

Strong convergence of inertial forward–backward methods for solving monotone inclusions

B Tan, SY Cho - Applicable Analysis, 2022 - Taylor & Francis
The paper presents four modifications of the inertial forward–backward splitting method for
monotone inclusion problems in the framework of real Hilbert spaces. The advantages of our …
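
In the optimization special case, where the two operators are ∇f and ∂g, a generic inertial forward-backward iteration of the kind these modifications build on (our sketch with a fixed inertial parameter, not one of the paper's four schemes) reads:

```python
import numpy as np

def inertial_forward_backward(grad_f, prox_g, x0, step=0.1, theta=0.3, n_iter=500):
    """Inertial forward-backward splitting:
    y_k     = x_k + theta * (x_k - x_{k-1})              (inertial extrapolation)
    x_{k+1} = prox_{step*g}(y_k - step * grad_f(y_k))    (forward then backward step)"""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)
        x_prev, x = x, prox_g(y - step * grad_f(y), step)
    return x

# Toy usage: min_x 0.5*||x - b||^2 + lam*||x||_1, whose prox is soft-thresholding
b, lam = np.array([3.0, -0.5, 0.2]), 1.0
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = inertial_forward_backward(lambda x: x - b, soft, np.zeros(3))
```

The modifications studied in the paper add rules designed to guarantee strong (norm) convergence of the iterates in infinite-dimensional Hilbert spaces, which plain schemes like the one above only provide in the weak sense.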