Parallel multistream training of high-dimensional neural network potentials
A Singraber, T Morawietz, J Behler… - Journal of chemical …, 2019 - ACS Publications
Over the past years, high-dimensional neural network potentials (HDNNPs), fitted to
accurately reproduce ab initio potential energy surfaces, have become a powerful tool in …
[PDF] Introducing CURRENNT: the Munich open-source CUDA recurrent neural network toolkit
F Weninger, J Bergmann, B Schuller - Journal of Machine Learning …, 2015 - jmlr.org
In this article, we introduce CURRENNT, an open-source parallel implementation of deep
recurrent neural networks (RNNs) supporting graphics processing units (GPUs) through …
Parallel implementation of artificial neural network training for speech recognition
In this paper we describe the implementation of a complete ANN training procedure using
the block mode back-propagation learning algorithm for sequential patterns, such as the …
[PDF] Flexible high-dimensional classification machines and their asymptotic properties
Classification is an important topic in statistics and machine learning with great potential in
many real applications. In this paper, we investigate two popular large-margin classification …
[BOOK][B] Machine Learning for Adaptive Many-Core Machines: A Practical Approach
Today the increasing complexity, performance requirements and cost of current (and future)
applications in society are transversal to a wide range of activities, from science to business …
Efficient parallelization of batch pattern training algorithm on many-core and cluster architectures
Experimental research on the parallel batch pattern back-propagation training algorithm,
using a recirculation neural network as an example, on many-core high-performance computing …
DNN training acceleration via exploring GPGPU friendly sparsity
The training phase of a deep neural network (DNN) consumes enormous processing time
and energy. Compression techniques utilizing the sparsity of DNNs can effectively …
Parallel batch pattern BP training algorithm of recurrent neural network
V Turchenko, L Grandinetti - 2010 IEEE 14th International …, 2010 - ieeexplore.ieee.org
The development of a parallel algorithm for batch pattern training of a recurrent neural network
with the back-propagation training algorithm, and research into its efficiency on general …
Parallel batch pattern training algorithm for deep neural network
V Turchenko, V Golovko - 2014 International Conference on …, 2014 - ieeexplore.ieee.org
The development of a parallel batch pattern training algorithm for a deep multilayer neural
network architecture, and research into its parallelization efficiency on many-core systems, are …
Parallel batch pattern training of neural networks on computational clusters
V Turchenko, L Grandinetti… - … Conference on High …, 2012 - ieeexplore.ieee.org
Research on the parallelization efficiency of a batch pattern training algorithm for a
multilayer perceptron on computational clusters is presented in this paper. The multilayer …
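The batch pattern training scheme that recurs in the Turchenko et al. entries above is, at its core, data-parallel batch gradient descent: workers compute gradients over disjoint slices of the training patterns, the partial gradients are summed, and one synchronous weight update is applied. A minimal sketch of this idea on a toy one-parameter linear model (all function names are illustrative, not taken from any of the cited implementations):

```python
def local_gradient(w, patterns):
    """Gradient of 0.5 * (w*x - y)**2 summed over one worker's slice."""
    return sum((w * x - y) * x for x, y in patterns)

def parallel_batch_step(w, batch, n_workers, lr):
    # Scatter: split the batch into one slice per worker.
    slices = [batch[i::n_workers] for i in range(n_workers)]
    # Map: each worker computes its partial gradient independently.
    partials = [local_gradient(w, s) for s in slices]
    # Reduce: sum the partial gradients (the all-reduce step),
    # then apply a single synchronous weight update.
    return w - lr * sum(partials)

def serial_batch_step(w, batch, lr):
    # Reference: the same update computed on the full batch at once.
    return w - lr * local_gradient(w, batch)
```

Because gradient summation is associative, the parallel step reproduces the serial full-batch update exactly, which is why the papers above measure parallelization efficiency (speedup) rather than any change in training behavior.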