Simple recurrence improves masked language models
arXiv preprint arXiv:2205.11588, 2022 (arxiv.org)
In this work, we explore whether incorporating recurrence into the Transformer architecture can be both beneficial and efficient, by building an extremely simple recurrent module into the Transformer. We compare our model to baselines following the training and evaluation recipe of BERT. Our results confirm that recurrence can indeed improve Transformer models by a consistent margin, without requiring low-level performance optimizations, and while keeping the number of parameters constant. For example, our base model achieves an absolute improvement of 2.1 points averaged across 10 tasks and also demonstrates increased stability in fine-tuning over a range of learning rates.
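The abstract does not describe the recurrent module's exact design, so the sketch below is only a hypothetical illustration of the general idea: a pre-norm Transformer encoder block with a lightweight recurrent sub-layer added as a residual branch between self-attention and the feed-forward network. The class name, the use of a GRU, and the suggestion of shrinking the feed-forward width to hold the parameter count roughly constant are all assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RecurrentTransformerBlock(nn.Module):
    """Hypothetical sketch (not the paper's module): a standard pre-norm
    Transformer encoder block with a simple recurrent sub-layer inserted
    between self-attention and the feed-forward network."""

    def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            d_model, n_heads, dropout=dropout, batch_first=True
        )
        # Simple recurrence over the token dimension. To keep the total
        # parameter count roughly constant, d_ff could be reduced to offset
        # the recurrent parameters (an assumption, not stated in the abstract).
        self.recurrence = nn.GRU(d_model, d_model, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, attn_mask=None):
        # Self-attention sub-layer (pre-norm, residual connection).
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + self.dropout(attn_out)
        # Recurrent sub-layer: run a simple RNN over the sequence as a
        # residual branch.
        rec_out, _ = self.recurrence(self.norm2(x))
        x = x + self.dropout(rec_out)
        # Feed-forward sub-layer.
        x = x + self.dropout(self.ff(self.norm3(x)))
        return x

# Usage example: the block maps (batch, sequence, hidden) to the same shape,
# so it can be stacked like an ordinary Transformer encoder layer.
block = RecurrentTransformerBlock()
tokens = torch.randn(2, 128, 768)
out = block(tokens)
print(out.shape)  # torch.Size([2, 128, 768])
```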