Fast decorrelated neural network ensembles with random weights

M. Alhamdoosh, D. Wang - Information Sciences, 2014 - Elsevier
Abstract
Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability by controlling the disagreement among the base learners' outputs. Such a learning scheme is usually implemented with feed-forward neural networks trained by error back-propagation (BPNNs). However, it suffers from slow convergence, the local minima problem, and model uncertainties caused by the initial weights and the settings of the learning parameters. To achieve a better solution, this paper employs random vector functional link (RVFL) networks as base components and incorporates the NCL strategy for building neural network ensembles. The basis functions of the base models are generated randomly, and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution is derived for these parameters from a cost function defined for NCL together with the well-known least squares method. To examine the merits of our proposed algorithm, a comparative study is carried out on nine benchmark datasets. The results indicate that our approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency.
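The closed-form fit described in the abstract lends itself to a compact implementation. Below is a minimal sketch in Python/NumPy, not the authors' code: each base model is an RVFL network whose input weights and biases are fixed at random, and the output weights of all ensemble members are obtained jointly by solving one block linear system derived from an NCL-style cost E = sum_i [(f_i - y)^2 / 2 - lambda (f_i - f_bar)^2]. The exact cost constants, the sigmoid basis, and the names rvfl_features, fit_dnne, and predict are illustrative assumptions, not taken from the paper.

# Hypothetical sketch of a decorrelated RVFL ensemble (not the authors' code).
# Assumes a single regression output and an NCL-style cost whose stationarity
# conditions couple all output-weight vectors in one block linear system.
import numpy as np

rng = np.random.default_rng(0)

def rvfl_features(X, W, b):
    """Augmented RVFL design matrix: [inputs | random sigmoid basis]."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # random weights, never trained
    return np.hstack([X, H])                  # direct input-to-output links

def fit_dnne(X, y, n_models=5, n_hidden=20, lam=0.3):
    """Solve all output weights jointly from the assumed NCL cost."""
    n, d = X.shape
    Ws = [rng.standard_normal((d, n_hidden)) for _ in range(n_models)]
    bs = [rng.standard_normal(n_hidden) for _ in range(n_models)]
    Hs = [rvfl_features(X, W, b) for W, b in zip(Ws, bs)]
    p = Hs[0].shape[1]
    # Block (i, j): (2*lam/M) * Hi^T Hj, plus (1 - 2*lam) * Hi^T Hi on the
    # diagonal; right-hand side stacks Hi^T y.
    A = np.zeros((n_models * p, n_models * p))
    rhs = np.zeros(n_models * p)
    for i in range(n_models):
        rhs[i*p:(i+1)*p] = Hs[i].T @ y
        for j in range(n_models):
            block = (2.0 * lam / n_models) * (Hs[i].T @ Hs[j])
            if i == j:
                block += (1.0 - 2.0 * lam) * (Hs[i].T @ Hs[i])
            A[i*p:(i+1)*p, j*p:(j+1)*p] = block
    beta = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(n_models, p)
    return Ws, bs, beta

def predict(X, Ws, bs, beta):
    """Ensemble output: average of the base RVFL models."""
    outs = [rvfl_features(X, W, b) @ beta[i]
            for i, (W, b) in enumerate(zip(Ws, bs))]
    return np.mean(outs, axis=0)

# Toy usage: learn y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = fit_dnne(X, y, n_models=5, n_hidden=25, lam=0.3)
print("train RMSE:", np.sqrt(np.mean((predict(X, *model) - y) ** 2)))

With lam = 0 the off-diagonal blocks vanish and each member reduces to an ordinary RVFL least-squares fit; increasing lam trades individual accuracy for diversity among the ensemble members, which is the effect NCL is designed to produce.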