A maximally split and relaxed ADMM for regularized extreme learning machines
IEEE Transactions on Neural Networks and Learning Systems, 2019 (ieeexplore.ieee.org)
One of the salient features of the extreme learning machine (ELM) is its fast learning speed. However, in a big data environment, the ELM still suffers from an overly heavy computational load due to the high dimensionality and the large amount of data. Using the alternating direction method of multipliers (ADMM), a convex model fitting problem can be split into a set of concurrently executable subproblems, each involving only a subset of the model coefficients. By maximally splitting across the coefficients and incorporating a novel relaxation technique, a maximally split and relaxed ADMM (MS-RADMM), along with a scalarwise implementation, is developed for the regularized ELM (RELM). The convergence conditions and the convergence rate of the MS-RADMM are established; the method converges linearly, with a smaller convergence ratio than the unrelaxed maximally split ADMM. The optimal parameter values of the MS-RADMM are derived, and a fast parameter selection scheme is provided. Experiments on ten benchmark classification data sets demonstrate the fast convergence and parallelism of the MS-RADMM. For performance evaluation, complexity comparisons with the matrix-inversion-based method are provided in terms of the numbers of multiplication and addition operations, the computation time, and the number of memory cells.
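The splitting-plus-relaxation idea behind the abstract can be illustrated with a textbook over-relaxed ADMM applied to the RELM output-weight problem min_beta 0.5*||H*beta - T||^2 + 0.5*lam*||beta||^2. The sketch below is a minimal illustration under assumed parameter values (lam, rho, alpha, the iteration count); it is not the paper's MS-RADMM, whose contribution is a scalarwise, matrix-inversion-free beta-update with optimally chosen relaxation, details of which require the full text.

```python
import numpy as np

# Minimal sketch: over-relaxed ADMM for the RELM output-weight problem
#   min_beta 0.5*||H beta - T||^2 + 0.5*lam*||beta||^2
# via the splitting
#   min 0.5*||H beta - T||^2 + 0.5*lam*||z||^2  s.t.  beta = z.
# This is a standard relaxed ADMM, NOT the paper's scalarwise MS-RADMM;
# lam, rho, alpha, and the iteration count are illustrative assumptions.

def relaxed_admm_relm(H, T, lam=1.0, rho=1.0, alpha=1.5, iters=500):
    n = H.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                   # scaled dual variable
    HtT = H.T @ T
    # The beta-update here solves an n x n linear system; the paper's
    # maximally split variant avoids this matrix inversion entirely
    # by updating one coefficient per subproblem.
    A = H.T @ H + rho * np.eye(n)
    for _ in range(iters):
        beta = np.linalg.solve(A, HtT + rho * (z - u))
        beta_hat = alpha * beta + (1.0 - alpha) * z   # over-relaxation step
        z = rho * (beta_hat + u) / (lam + rho)        # scalarwise ridge prox
        u = u + beta_hat - z                          # dual update
    return z

# Toy usage: random "hidden-layer output" H and targets T.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 20))
T = H @ rng.standard_normal(20) + 0.1 * rng.standard_normal(100)
beta = relaxed_admm_relm(H, T)
closed_form = np.linalg.solve(H.T @ H + np.eye(20), H.T @ T)
print(np.allclose(beta, closed_form, atol=1e-4))
```

Note that the z- and dual-updates above are already elementwise; only the beta-update couples the coefficients through a matrix solve, which is precisely the coupling the paper's maximal splitting removes, and the relaxation parameter alpha in (0, 2) is what the relaxed variant tunes to shrink the linear convergence ratio.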