| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension | M Lewis, Y Liu, N Goyal, M Ghazvininejad, A Mohamed, O Levy, ... | arXiv preprint arXiv:1910.13461 | 10068 | 2019 |
| Multilingual denoising pre-training for neural machine translation | Y Liu, J Gu, N Goyal, X Li, S Edunov, M Ghazvininejad, M Lewis, ... | Transactions of the Association for Computational Linguistics 8, 726-742 | 1664 | 2020 |
| A knowledge-grounded neural conversation model | M Ghazvininejad, C Brockett, MW Chang, B Dolan, J Gao, W Yih, ... | Proceedings of the AAAI Conference on Artificial Intelligence 32 (1) | 645 | 2018 |
| Mask-Predict: Parallel decoding of conditional masked language models | M Ghazvininejad, O Levy, Y Liu, L Zettlemoyer | arXiv preprint arXiv:1904.09324 | 553 | 2019 |
| Generating Topical Poetry | M Ghazvininejad, X Shi, Y Choi, K Knight | Proceedings of Empirical Methods in Natural Language Processing (EMNLP) | 193 | 2016 |
| Hafez: An Interactive Poetry Generation System | M Ghazvininejad, X Shi, J Priyadarshi, K Knight | Proceedings of ACL 2017, System Demonstrations | 192 | 2017 |
| Towards controllable story generation | N Peng, M Ghazvininejad, J May, K Knight | Proceedings of the First Workshop on Storytelling, 43-49 | 172 | 2018 |
| Detecting hallucinated content in conditional neural sequence generation | C Zhou, G Neubig, J Gu, M Diab, P Guzman, L Zettlemoyer, ... | arXiv preprint arXiv:2011.02593 | 165 | 2020 |
| Pre-training via paraphrasing | M Lewis, M Ghazvininejad, G Ghosh, A Aghajanyan, S Wang, ... | Advances in Neural Information Processing Systems 33, 18470-18481 | 155 | 2020 |
| In-context examples selection for machine translation | S Agrawal, C Zhou, M Lewis, L Zettlemoyer, M Ghazvininejad | arXiv preprint arXiv:2212.02437 | 137 | 2022 |
| DeLighT: Deep and light-weight Transformer | S Mehta, M Ghazvininejad, S Iyer, L Zettlemoyer, H Hajishirzi | arXiv preprint arXiv:2008.00623 | 128 | 2020 |
| A review on language models as knowledge bases | B AlKhamissi, M Li, A Celikyilmaz, M Diab, M Ghazvininejad | arXiv preprint arXiv:2204.06031 | 127 | 2022 |
| Non-autoregressive machine translation with disentangled context transformer | J Kasai, J Cross, M Ghazvininejad, J Gu | International Conference on Machine Learning, 5144-5155 | 110* | 2020 |
| Aligned cross entropy for non-autoregressive machine translation | M Ghazvininejad, V Karpukhin, L Zettlemoyer, O Levy | International Conference on Machine Learning, 3515-3523 | 108 | 2020 |
| Training on synthetic noise improves robustness to natural noise in machine translation | V Karpukhin, O Levy, J Eisenstein, M Ghazvininejad | Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), 42-47 | 106 | 2019 |
| Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation | AR Fabbri, S Han, H Li, H Li, M Ghazvininejad, S Joty, D Radev, ... | arXiv preprint arXiv:2010.12836 | 94 | 2020 |
| Natural language to code translation with execution | F Shi, D Fried, M Ghazvininejad, L Zettlemoyer, SI Wang | arXiv preprint arXiv:2204.11454 | 70 | 2022 |
| Prompting contrastive explanations for commonsense reasoning tasks | B Paranjape, J Michael, M Ghazvininejad, L Zettlemoyer, H Hajishirzi | arXiv preprint arXiv:2106.06823 | 70 | 2021 |
| Semi-autoregressive training improves mask-predict decoding | M Ghazvininejad, O Levy, L Zettlemoyer | arXiv preprint arXiv:2001.08785 | 63 | 2020 |