Explaining recurrent neural network predictions in sentiment analysis

L Arras, G Montavon, KR Müller, W Samek - arXiv preprint arXiv:1706.07206, 2017 - arxiv.org
Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.
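The propagation rule for multiplicative connections mentioned in the abstract can be illustrated with a small sketch. The snippet below is a minimal, illustrative NumPy implementation, not the authors' reference code: lrp_linear applies a simplified epsilon-stabilized LRP rule to a linear layer (it omits the bias-redistribution term used in the paper), and lrp_gated_product shows the proposed treatment of a gated product, where the gate neuron receives zero relevance and the signal neuron inherits all of it. The function names, toy dimensions, and eps value are assumptions made for this example.

```python
import numpy as np

def lrp_linear(a, W, b, R_out, eps=1e-3):
    """Simplified LRP-epsilon rule for a linear layer z = a @ W + b.

    Redistributes the output relevance R_out onto the inputs a in
    proportion to each contribution a_i * W_ij. (Sketch only; the paper
    additionally redistributes the bias/stabilizer share across inputs.)
    """
    z = a @ W + b                                   # pre-activations, shape (d_out,)
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized denominator
    return (a[:, None] * W) @ (R_out / denom)       # relevance per input, shape (d_in,)

def lrp_gated_product(R_product):
    """Relevance propagation through a multiplicative connection c = gate * signal,
    as it occurs in LSTM/GRU cells: the gate receives zero relevance and the
    signal neuron inherits all of it.
    """
    return np.zeros_like(R_product), R_product.copy()

# Hypothetical usage: propagate relevance through one gated product and one
# linear (input-to-candidate) transformation of an LSTM-like cell.
rng = np.random.default_rng(0)
d_in, d_out = 4, 3
x = rng.normal(size=d_in)                # toy input word embedding
W = rng.normal(size=(d_in, d_out))
b = np.zeros(d_out)
R_cell = rng.uniform(size=d_out)         # relevance arriving at the cell state

R_gate, R_signal = lrp_gated_product(R_cell)   # gate gets 0, candidate keeps all
R_x = lrp_linear(x, W, b, R_signal)            # redistribute onto the input embedding
print(R_gate, R_x)
```

The treatment of the gate is the key design choice in this sketch: the gate only modulates how much information flows through the product, so the propagated relevance is attributed entirely to the source of that information rather than being split between the two factors.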