Title | Authors | Venue | Cited by | Year
How good are GPT models at machine translation? A comprehensive evaluation | A Hendy, M Abdelrehim, A Sharaf, V Raunak, M Gabr, H Matsushita, ... | arXiv preprint arXiv:2302.09210, 2023 | 255 | 2023 |
Scalable and efficient MoE training for multitask multilingual models | YJ Kim, AA Awan, A Muzio, AFC Salinas, L Lu, A Hendy, S Rajbhandari, ... | arXiv preprint arXiv:2109.10465, 2021 | 65 | 2021 |
Domain specific sub-network for multi-domain neural machine translation | A Hendy, M Abdelghaffar, M Afify, AY Tawfik | arXiv preprint arXiv:2210.09805, 2022 | 6 | 2022 |
Score combination for improved parallel corpus filtering for low resource conditions | MN ElNokrashy, A Hendy, M Abdelghaffar, M Afify, A Tawfik, HH Awadalla | arXiv preprint arXiv:2011.07933, 2020 | 4 | 2020 |
Language tokens: A frustratingly simple approach improves zero-shot performance of multilingual translation | M ElNokrashy, A Hendy, M Maher, M Afify, HH Awadalla | arXiv preprint arXiv:2208.05852, 2022 | 3 | 2022 |
Ensembling of distilled models from multi-task teachers for constrained resource language pairs | A Hendy, EA Gad, M Abdelghaffar, JS ElMosalami, M Afify, AY Tawfik, ... | arXiv preprint arXiv:2111.13284, 2021 | 2 | 2021 |
Language Tokens: Simply Improving Zero-Shot Multi-Aligned Translation in Encoder-Decoder Models | MN ElNokrashy, A Hendy, M Maher, M Afify, HH Awadalla | Proceedings of the 15th biennial conference of the Association for Machine …, 2022 | 1 | 2022 |