Learning to remember translation history with a continuous cache
Transactions of the Association for Computational Linguistics, 2018 (direct.mit.edu)
Abstract
Existing neural machine translation (NMT) models generally translate sentences in isolation, missing the opportunity to take advantage of document-level information. In this work, we propose to augment NMT models with a very light-weight cache-like memory network, which stores recent hidden representations as translation history. The probability distribution over generated words is updated online depending on the translation history retrieved from the memory, endowing NMT models with the capability to dynamically adapt over time. Experiments on multiple domains with different topics and styles show the effectiveness of the proposed approach with negligible impact on the computational cost.
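To make the mechanism described in the abstract concrete, below is a minimal sketch of the general continuous-cache idea: recent decoder hidden states are stored as keys alongside the target words they produced, the current decoding state queries the cache via dot-product matching, and the resulting cache distribution is interpolated with the NMT model's softmax. This is an illustrative simplification, not the authors' exact formulation; all names (`ContinuousCache`, `combine`, the gating scalar, the use of dot-product scores) are assumptions for the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class ContinuousCache:
    """Fixed-size cache of recent decoder hidden states (keys) and the
    target word ids generated at those steps (values).

    This follows the generic continuous-cache recipe; the paper's actual
    key/value choices and scoring function may differ.
    """
    def __init__(self, capacity, dim):
        self.capacity = capacity
        self.keys = np.zeros((0, dim))          # cached hidden states
        self.word_ids = np.zeros((0,), dtype=int)

    def add(self, hidden_state, word_id):
        """Append a new entry, evicting the oldest once capacity is reached."""
        self.keys = np.vstack([self.keys, hidden_state[None, :]])[-self.capacity:]
        self.word_ids = np.concatenate([self.word_ids, [word_id]])[-self.capacity:]

    def retrieve(self, query, vocab_size, temperature=1.0):
        """Match the current decoder state against cached keys and scatter the
        match weights onto the words stored in the cache, yielding a
        probability distribution over the vocabulary."""
        if len(self.word_ids) == 0:
            return np.zeros(vocab_size)
        scores = softmax(self.keys @ query / temperature)
        p_cache = np.zeros(vocab_size)
        np.add.at(p_cache, self.word_ids, scores)
        return p_cache

def combine(p_model, p_cache, gate):
    """Interpolate the NMT model's distribution with the cache distribution.
    `gate` in [0, 1] controls how much the translation history is trusted."""
    return (1.0 - gate) * p_model + gate * p_cache
```

In this sketch the cache is updated online as decoding proceeds, so later sentences in a document can reuse word choices from earlier ones at essentially the cost of one small matrix-vector product per step, which is consistent with the abstract's claim of negligible computational overhead.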