Title | Authors | Venue | Cited by | Year
Siren's song in the AI ocean: a survey on hallucination in large language models | Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu, X Huang, E Zhao, Y Zhang, ... | arXiv preprint arXiv:2309.01219, 2023 | 638 | 2023
Exploiting cross-sentence context for neural machine translation | L Wang, Z Tu, A Way, Q Liu | arXiv preprint arXiv:1704.04347, 2017 | 232 | 2017
Convolutional self-attention networks | B Yang, L Wang, D Wong, LS Chao, Z Tu | arXiv preprint arXiv:1904.03107, 2019 | 145 | 2019
UM-Corpus: A large English-Chinese parallel corpus for statistical machine translation | L Tian, DF Wong, LS Chao, P Quaresma, F Oliveira, L Yi, S Li, Y Wang, ... | LREC, 1837-1842, 2014 | 137 | 2014
Macaw-LLM: Multi-modal language modeling with image, audio, video, and text integration | C Lyu, M Wu, L Wang, X Huang, B Liu, Z Du, S Shi, Z Tu | arXiv preprint arXiv:2306.09093, 2023 | 120 | 2023
Document-level machine translation with large language models | L Wang, C Lyu, T Ji, Z Zhang, D Yu, S Shi, Z Tu | arXiv preprint arXiv:2304.02210, 2023 | 120 | 2023
Understanding and improving lexical choice in non-autoregressive translation | L Ding, L Wang, X Liu, DF Wong, D Tao, Z Tu | arXiv preprint arXiv:2012.14583, 2020 | 116 | 2020
Modeling recurrence for transformer | J Hao, X Wang, B Yang, L Wang, J Zhang, Z Tu | arXiv preprint arXiv:1904.03092, 2019 | 92 | 2019
Self-attention with cross-lingual position representation | L Ding, L Wang, D Tao | arXiv preprint arXiv:2004.13310, 2020 | 79 | 2020
Self-attention with structural position representations | X Wang, Z Tu, L Wang, S Shi | arXiv preprint arXiv:1909.00383, 2019 | 78 | 2019
Context-aware cross-attention for non-autoregressive translation | L Ding, L Wang, D Wu, D Tao, Z Tu | arXiv preprint arXiv:2011.00770, 2020 | 71 | 2020
Rejuvenating low-frequency words: Making the most of parallel data in non-autoregressive translation | L Ding, L Wang, X Liu, DF Wong, D Tao, Z Tu | arXiv preprint arXiv:2106.00903, 2021 | 64 | 2021
Redistributing low-frequency words: Making the most of monolingual data in non-autoregressive translation | L Ding, L Wang, S Shi, D Tao, Z Tu | Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022 | 60 | 2022
New trends in machine translation using large language models: Case examples with ChatGPT | C Lyu, J Xu, L Wang | arXiv preprint arXiv:2305.01181, 2023 | 54* | 2023
Dynamic layer aggregation for neural machine translation with routing-by-agreement | ZY Dou, Z Tu, X Wang, L Wang, S Shi, T Zhang | Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 86-93, 2019 | 53 | 2019
Progressive multi-granularity training for non-autoregressive translation | L Ding, L Wang, X Liu, DF Wong, D Tao, Z Tu | arXiv preprint arXiv:2106.05546, 2021 | 50 | 2021
Translating pro-drop languages with reconstruction models | L Wang, Z Tu, S Shi, T Zhang, Y Graham, Q Liu | Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018 | 49 | 2018
A novel approach to dropped pronoun translation | L Wang, Z Tu, X Zhang, H Li, A Way, Q Liu | arXiv preprint arXiv:1604.06285, 2016 | 48 | 2016
Towards understanding neural machine translation with word importance | S He, Z Tu, X Wang, L Wang, MR Lyu, S Shi | arXiv preprint arXiv:1909.00326, 2019 | 45 | 2019
Assessing the ability of self-attention networks to learn word order | B Yang, L Wang, DF Wong, LS Chao, Z Tu | arXiv preprint arXiv:1906.00592, 2019 | 42 | 2019