Structured in space, randomized in time: leveraging dropout in RNNs for efficient training

A Sarma, S Singh, H Jiang, R Zhang…
Advances in Neural Information Processing Systems, 2021 (proceedings.neurips.cc)
Abstract
Recurrent Neural Networks (RNNs), more specifically their Long Short-Term Memory (LSTM) variants, have been widely used as a deep learning tool for tackling sequence-based learning tasks in text and speech. Training such LSTM applications is computationally intensive due to the recurrent nature of hidden state computation, which repeats at each time step. While sparsity in Deep Neural Nets has been widely seen as an opportunity for reducing computation time in both training and inference, the use of non-ReLU activations in LSTM RNNs leaves the dynamic sparsity associated with neuron activations and gradient values limited or non-existent. In this work, we identify dropout-induced sparsity for LSTMs as a suitable mode of computation reduction. Dropout is a widely used regularization mechanism that randomly drops computed neuron values during each training iteration. We propose to structure dropout patterns by dropping out the same set of physical neurons within a batch, resulting in column (row) level hidden state sparsity, which is amenable to computation reduction at run-time on general-purpose SIMD hardware as well as systolic arrays. We provide a detailed analysis of how the dropout-induced sparsity propagates through the different stages of network training and how it can be leveraged in each stage. More importantly, our proposed approach works as a direct replacement for existing dropout-based application settings. We conduct experiments on three representative NLP tasks: language modelling on the PTB dataset, OpenNMT-based machine translation on the IWSLT De-En and En-Vi datasets, and named entity recognition sequence labelling on the CoNLL-2003 shared task. We demonstrate that our proposed approach can translate dropout-based computation reduction into reduced training time, with improvements ranging from 1.23× to 1.64×, without sacrificing the target metric.
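The core idea in the abstract, dropping the same set of physical neurons for every sample in a batch so that whole columns of the hidden-state matrix become zero, can be illustrated with a short sketch. The snippet below is a minimal PyTorch-style illustration under that reading of the abstract, not the authors' implementation; the function name batch_shared_dropout, the dropout rate, and the LSTM-cell usage shown in the comments are assumptions for exposition.

```python
import torch

def batch_shared_dropout(h: torch.Tensor, p: float = 0.3, training: bool = True) -> torch.Tensor:
    """Drop the same hidden units for every sample in the batch (hypothetical sketch).

    h: hidden state of shape (batch, hidden_size).
    Unlike standard element-wise dropout, a single Bernoulli mask over the
    hidden dimension is shared across the batch, so entire columns of the
    (batch x hidden) activation matrix become zero. Surviving units are
    rescaled by 1/(1-p), as in inverted dropout.
    """
    if not training or p == 0.0:
        return h
    # One keep/drop decision per hidden unit, broadcast over the batch dimension.
    keep = (torch.rand(h.size(1), device=h.device) > p).to(h.dtype)
    return h * keep.unsqueeze(0) / (1.0 - p)

# Illustrative use inside an unrolled LSTM step:
#   h_t, c_t = lstm_cell(x_t, (h_t, c_t))
#   h_t = batch_shared_dropout(h_t, p=0.3, training=model.training)
```

Because entire columns of the activation matrix are zeroed, the corresponding rows or columns of the subsequent matrix multiplications can be skipped or compacted, which is how structured, batch-shared dropout can be turned into run-time savings on SIMD and systolic-array hardware rather than remaining a purely element-wise mask.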