Amr Hendy
Title
Cited by
Year
How good are gpt models at machine translation? a comprehensive evaluation
A Hendy, M Abdelrehim, A Sharaf, V Raunak, M Gabr, H Matsushita, ...
arXiv preprint arXiv:2302.09210, 2023
255 · 2023
Scalable and efficient moe training for multitask multilingual models
YJ Kim, AA Awan, A Muzio, AFC Salinas, L Lu, A Hendy, S Rajbhandari, ...
arXiv preprint arXiv:2109.10465, 2021
65 · 2021
How good are GPT models at machine translation? A comprehensive evaluation
A Hendy, M Abdelrehim, A Sharaf, V Raunak, M Gabr, H Matsushita, ...
arXiv preprint arXiv:2302.09210, 2023
10 · 2023
How good are GPT models at machine translation? A comprehensive evaluation
A Hendy, M Abdelrehim, A Sharaf, V Raunak, M Gabr, H Matsushita
arXiv preprint arXiv:2302.09210
7
Domain specific sub-network for multi-domain neural machine translation
A Hendy, M Abdelghaffar, M Afify, AY Tawfik
arXiv preprint arXiv:2210.09805, 2022
6 · 2022
Score combination for improved parallel corpus filtering for low resource conditions
MN ElNokrashy, A Hendy, M Abdelghaffar, M Afify, A Tawfik, HH Awadalla
arXiv preprint arXiv:2011.07933, 2020
4 · 2020
Language tokens: A frustratingly simple approach improves zero-shot performance of multilingual translation
M ElNokrashy, A Hendy, M Maher, M Afify, HH Awadalla
arXiv preprint arXiv:2208.05852, 2022
3 · 2022
Ensembling of distilled models from multi-task teachers for constrained resource language pairs
A Hendy, EA Gad, M Abdelghaffar, JS ElMosalami, M Afify, AY Tawfik, ...
arXiv preprint arXiv:2111.13284, 2021
2 · 2021
Language Tokens: Simply Improving Zero-Shot Multi-Aligned Translation in Encoder-Decoder Models
MN ElNokrashy, A Hendy, M Maher, M Afify, HH Awadalla
Proceedings of the 15th biennial conference of the Association for Machine …, 2022
1 · 2022
Articles 1–9