Unsupervised pretraining for sequence to sequence learning

P Ramachandran, PJ Liu, QV Le - arXiv preprint arXiv:1611.02683, 2016 - arxiv.org
This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that pretraining improves the generalization of seq2seq models. We achieve state-of-the-art results on the WMT English→German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves a significant improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English→German. We also conduct human evaluations on abstractive summarization and find that our method outperforms a purely supervised learning baseline in a statistically significant manner.
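The abstract describes the core recipe: pretrain two language models on unlabeled source-side and target-side text, copy their weights into the encoder and decoder of a seq2seq model, and then fine-tune on labeled pairs. The sketch below (PyTorch, not the authors' code) illustrates one plausible way to do that weight transfer; the module names, vocabulary size, and dimensions are illustrative assumptions, and details such as attention and the authors' joint LM/seq2seq fine-tuning objective are omitted.

    # Illustrative sketch only: warm-starting a seq2seq model from two
    # pretrained language models, then fine-tuning on labeled data.
    import torch.nn as nn

    VOCAB, EMB, HID = 10000, 256, 512  # assumed sizes for illustration

    class LSTMLanguageModel(nn.Module):
        """Plain LSTM language model, pretrained on unlabeled monolingual text."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMB)
            self.lstm = nn.LSTM(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)

        def forward(self, tokens):
            h, _ = self.lstm(self.embed(tokens))
            return self.out(h)

    class Seq2Seq(nn.Module):
        """Encoder-decoder whose weights can be initialized from two LMs."""
        def __init__(self):
            super().__init__()
            self.src_embed = nn.Embedding(VOCAB, EMB)
            self.tgt_embed = nn.Embedding(VOCAB, EMB)
            self.encoder = nn.LSTM(EMB, HID, batch_first=True)
            self.decoder = nn.LSTM(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)

        def init_from_language_models(self, src_lm, tgt_lm):
            # Copy pretrained LM weights into the matching seq2seq components:
            # source LM -> encoder, target LM -> decoder and output projection.
            self.src_embed.load_state_dict(src_lm.embed.state_dict())
            self.encoder.load_state_dict(src_lm.lstm.state_dict())
            self.tgt_embed.load_state_dict(tgt_lm.embed.state_dict())
            self.decoder.load_state_dict(tgt_lm.lstm.state_dict())
            self.out.load_state_dict(tgt_lm.out.state_dict())

        def forward(self, src, tgt):
            _, state = self.encoder(self.src_embed(src))
            h, _ = self.decoder(self.tgt_embed(tgt), state)
            return self.out(h)

    # Usage (assumed workflow): pretrain src_lm and tgt_lm with a standard
    # next-token cross-entropy loss on monolingual corpora, call
    # model.init_from_language_models(src_lm, tgt_lm), then fine-tune the
    # whole seq2seq model on parallel (src, tgt) pairs.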