Sparsity through evolutionary pruning prevents neuronal networks from overfitting

RC Gerum, A Erpenbeck, P Krauss, A Schilling - Neural Networks, 2020 - Elsevier
Modern machine learning techniques take advantage of the exponentially rising computational
power of new-generation processing units. Thus, the number of parameters that are trained …

Learning sparse networks using targeted dropout

AN Gomez, I Zhang, SR Kamalakara, D Madaan… - arXiv preprint arXiv …, 2019 - arxiv.org
Neural networks are easier to optimise when they have many more weights than are
required for modelling the mapping from inputs to outputs. This suggests a two-stage …
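The mechanism behind targeted dropout, as I read it, is to stochastically drop the weights that a later magnitude-based pruning pass would remove, so the network learns to tolerate that pruning. Below is a minimal NumPy sketch of a per-layer targeted-dropout mask; the parameter names gamma (fraction of candidate weights) and alpha (drop probability) are illustrative, not taken from the authors' code.

    import numpy as np

    def targeted_dropout_mask(w, gamma=0.5, alpha=0.5, rng=None):
        """Stochastically drop the lowest-magnitude weights of one layer.

        gamma: fraction of weights treated as pruning candidates
        alpha: probability of dropping each candidate during this step
        """
        rng = np.random.default_rng() if rng is None else rng
        flat = np.abs(w).ravel()
        k = int(gamma * flat.size)                    # number of candidates
        if k == 0:
            return np.ones_like(w)
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        candidates = np.abs(w) <= threshold           # lowest-magnitude weights
        drop = candidates & (rng.random(w.shape) < alpha)
        return np.where(drop, 0.0, 1.0)

    # usage: multiply the layer's weights by the mask in each forward pass
    w = np.random.randn(64, 128)
    w_dropped = w * targeted_dropout_mask(w, gamma=0.5, alpha=0.5)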

Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
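The most common form of pruning covered by this survey is magnitude pruning: zero out the weights with the smallest absolute values and keep a binary mask for the survivors. A minimal sketch, assuming a single dense weight matrix and a global sparsity target (function and variable names are mine):

    import numpy as np

    def magnitude_prune(w, sparsity=0.9):
        """Return a pruned copy of w and the binary mask that keeps
        roughly the (1 - sparsity) fraction of largest-magnitude weights."""
        k = int(sparsity * w.size)                 # number of weights to remove
        threshold = np.sort(np.abs(w), axis=None)[k - 1] if k > 0 else -np.inf
        mask = (np.abs(w) > threshold).astype(w.dtype)
        return w * mask, mask

    w = np.random.randn(256, 256)
    w_sparse, mask = magnitude_prune(w, sparsity=0.9)
    print(f"kept {mask.mean():.1%} of the weights")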

Pruning neural networks: is it time to nip it in the bud?

EJ Crowley, J Turner, A Storkey, M O'Boyle - 2018 - openreview.net
Pruning is a popular technique for compressing a neural network: a large pre-trained
network is fine-tuned while connections are successively removed. However, the value of …

The lottery ticket hypothesis: Finding sparse, trainable neural networks

J Frankle, M Carbin - arXiv preprint arXiv:1803.03635, 2018 - arxiv.org
Neural network pruning techniques can reduce the parameter counts of trained networks by
over 90%, decreasing storage requirements and improving computational performance of …
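The procedure behind the lottery ticket hypothesis is iterative magnitude pruning with weight rewinding: train the network, prune a fraction of the smallest-magnitude surviving weights, reset the survivors to their original initialisation, and repeat. A schematic sketch is below; the train callable stands in for the task-specific training loop and is hypothetical.

    import numpy as np

    def find_lottery_ticket(w_init, train, rounds=5, prune_per_round=0.2):
        """Iterative magnitude pruning with rewinding to the initial weights.

        w_init: initial weights (kept so survivors can be rewound)
        train:  callable that trains the masked weights and returns them
        """
        mask = np.ones_like(w_init)
        for _ in range(rounds):
            w_trained = train(w_init * mask) * mask        # train the current subnetwork
            alive = np.abs(w_trained[mask == 1])
            k = int(prune_per_round * alive.size)          # prune a fraction of survivors
            if k == 0:
                break
            threshold = np.sort(alive)[k - 1]
            mask = np.where((np.abs(w_trained) <= threshold) & (mask == 1), 0.0, mask)
        return w_init * mask, mask                         # rewind survivors to init

    # toy usage with a stand-in "training" step
    rng = np.random.default_rng(0)
    w0 = rng.standard_normal((32, 32))
    ticket, mask = find_lottery_ticket(w0, train=lambda w: w + 0.1 * rng.standard_normal(w.shape))
    print(f"winning ticket keeps {mask.mean():.1%} of the weights")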

Lassonet: Neural networks with feature sparsity

I Lemhadri, F Ruan… - … conference on artificial …, 2021 - proceedings.mlr.press
Much work has been done recently to make neural networks more interpretable, and one
approach is to arrange for the network to use only a subset of the available features. In linear …
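LassoNet obtains feature sparsity by adding a linear skip connection from the inputs to the output, penalising its weights with an L1 term, and constraining each feature's hidden-layer weights by its skip weight. The sketch below shows only the simplest ingredient of that recipe, an L1 proximal (soft-thresholding) step on per-feature skip weights; the paper's hierarchical constraint is omitted, and theta and lam are illustrative names rather than the authors' notation.

    import numpy as np

    def soft_threshold(x, lam):
        """Proximal operator of the L1 penalty (drives small entries exactly to zero)."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def sparsify_features(theta, lam=0.1):
        """Soft-threshold the per-feature skip weights theta.
        Features whose weight reaches zero are effectively dropped."""
        theta = soft_threshold(theta, lam)
        selected = np.nonzero(theta)[0]
        return theta, selected

    theta = np.array([0.8, -0.05, 0.3, 0.02, -0.6])
    theta_sparse, selected = sparsify_features(theta, lam=0.1)
    print("selected feature indices:", selected)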

Pruning artificial neural networks using neural complexity measures

TD Jorgensen, BP Haynes… - International Journal of …, 2008 - World Scientific
This paper describes a new method for pruning artificial neural networks, using a measure
of the network's neural complexity. This measure is used to determine the …

Powerpropagation: A sparsity inducing weight reparameterisation

J Schwarz, S Jayakumar, R Pascanu… - Advances in neural …, 2021 - proceedings.neurips.cc
The training of sparse neural networks is becoming an increasingly important tool for
reducing the computational footprint of models at training and evaluation, as well as enabling …
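Powerpropagation stores a latent parameter phi for each weight and uses the effective weight w = phi * |phi|**(alpha - 1). Because the gradient with respect to phi is then scaled by |phi|**(alpha - 1), small parameters receive proportionally smaller updates and are pushed towards exact zero by subsequent magnitude pruning. A minimal NumPy sketch of the forward mapping and its gradient factor; alpha is the paper's exponent hyperparameter, the other names are mine.

    import numpy as np

    def powerprop_weight(phi, alpha=2.0):
        """Effective weight under the Powerpropagation reparameterisation."""
        return np.sign(phi) * np.abs(phi) ** alpha

    def powerprop_grad_factor(phi, alpha=2.0):
        """dw/dphi: loss gradients w.r.t. phi are scaled by this factor,
        so small-magnitude parameters move less during training."""
        return alpha * np.abs(phi) ** (alpha - 1)

    phi = np.array([1.0, 0.1, -0.5, 0.01])
    print(powerprop_weight(phi))        # [ 1.      0.01   -0.25    0.0001]
    print(powerprop_grad_factor(phi))   # [ 2.      0.2     1.      0.02  ]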

Frivolous units: Wider networks are not really that wide

S Casper, X Boix, V D'Amario, L Guo… - Proceedings of the …, 2021 - ojs.aaai.org
A remarkable characteristic of overparameterized deep neural networks (DNNs) is that their
accuracy does not degrade when the network width is increased. Recent evidence suggests …

Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

DC Mocanu, E Mocanu, P Stone, PH Nguyen… - Nature …, 2018 - nature.com
Through the success of deep learning in various domains, artificial neural networks are
currently among the most used artificial intelligence methods. Taking inspiration from the …
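The adaptive sparse connectivity in this paper (sparse evolutionary training) starts from a sparse random topology and, after each training epoch, removes a fraction of the smallest-magnitude connections and regrows the same number at random empty positions, keeping the overall density constant. Below is a minimal sketch of one prune-and-regrow step, assuming weights stored densely with an explicit mask; zeta and the other names are illustrative, and new connections are given small random values as a simplification.

    import numpy as np

    def prune_and_regrow(w, mask, zeta=0.3, rng=None):
        """One evolution step: remove the zeta fraction of the smallest-magnitude
        active connections, then regrow as many connections at random empty positions."""
        rng = np.random.default_rng() if rng is None else rng
        active = np.flatnonzero(mask)
        n_remove = int(zeta * active.size)
        if n_remove == 0:
            return w, mask
        # remove the weakest active connections
        weakest = active[np.argsort(np.abs(w.ravel()[active]))[:n_remove]]
        mask.ravel()[weakest] = 0
        w.ravel()[weakest] = 0.0
        # regrow the same number of connections at random empty positions
        empty = np.flatnonzero(mask == 0)
        regrow = rng.choice(empty, size=n_remove, replace=False)
        mask.ravel()[regrow] = 1
        w.ravel()[regrow] = 0.01 * rng.standard_normal(n_remove)
        return w, mask

    rng = np.random.default_rng(0)
    mask = (rng.random((16, 16)) < 0.1).astype(float)   # ~10% initial density
    w = rng.standard_normal((16, 16)) * mask
    w, mask = prune_and_regrow(w, mask, zeta=0.3, rng=rng)
    print(f"density after one step: {mask.mean():.1%}")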