On the sentence embeddings from pre-trained language models B Li, H Zhou, J He, M Wang, Y Yang, L Li arXiv preprint arXiv:2011.05864, 2020 | 601 | 2020 |
Deep semantic role labeling with self-attention Z Tan, M Wang, J Xie, Y Chen, X Shi Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018 | 391 | 2018 |
Contrastive learning for many-to-many multilingual neural machine translation X Pan, M Wang, L Wu, L Li arXiv preprint arXiv:2105.09501, 2021 | 167 | 2021 |
Towards making the most of BERT in neural machine translation J Yang, M Wang, H Zhou, C Zhao, W Zhang, Y Yu, L Li Proceedings of the AAAI Conference on Artificial Intelligence 34, 2020 | 159* | 2020 |
Encoding source language with convolutional neural network for machine translation F Meng, Z Lu, M Wang, H Li, W Jiang, Q Liu arXiv preprint arXiv:1503.01838, 2015 | 145 | 2015 |
Glancing transformer for non-autoregressive neural machine translation L Qian, H Zhou, Y Bao, M Wang, L Qiu, W Zhang, Y Yu, L Li arXiv preprint arXiv:2008.07905, 2020 | 138 | 2020 |
Pre-training multilingual neural machine translation by leveraging alignment information Z Lin, X Pan, M Wang, X Qiu, J Feng, H Zhou, L Li arXiv preprint arXiv:2010.03142, 2020 | 116 | 2020 |
Syntax-based deep matching of short texts M Wang, Z Lu, H Li, Q Liu arXiv preprint arXiv:1503.02427, 2015 | 101 | 2015 |
A hierarchy-to-sequence attentional neural machine translation model J Su, J Zeng, D Xiong, Y Liu, M Wang, J Xie IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (3), 623-632, 2018 | 98 | 2018 |
Imitation learning for non-autoregressive neural machine translation B Wei, M Wang, H Zhou, J Lin, J Xie, X Sun arXiv preprint arXiv:1906.02041, 2019 | 95 | 2019 |
STEMM: Self-learning with speech-text manifold mixup for speech translation Q Fang, R Ye, L Li, Y Feng, M Wang arXiv preprint arXiv:2203.10426, 2022 | 80 | 2022 |
Learning language specific sub-network for multilingual machine translation Z Lin, L Wu, M Wang, L Li arXiv preprint arXiv:2105.09259, 2021 | 79 | 2021 |
Memory-enhanced decoder for neural machine translation M Wang, Z Lu, H Li, Q Liu arXiv preprint arXiv:1606.02003, 2016 | 79 | 2016 |
Cross-modal contrastive learning for speech translation R Ye, M Wang, L Li arXiv preprint arXiv:2205.02444, 2022 | 71 | 2022 |
End-to-end speech translation via cross-modal progressive training R Ye, M Wang, L Li arXiv preprint arXiv:2104.10380, 2021 | 70 | 2021 |
Learning shared semantic space for speech-to-text translation C Han, M Wang, H Ji, L Li arXiv preprint arXiv:2105.03095, 2021 | 67 | 2021 |
Rethinking document-level neural machine translation Z Sun, M Wang, H Zhou, C Zhao, S Huang, J Chen, L Li arXiv preprint arXiv:2010.08961, 2020 | 60 | 2020 |
Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation Q Dong, R Ye, M Wang, H Zhou, S Xu, B Xu, L Li Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12749 …, 2021 | 58 | 2021 |
LightSeq: A high performance inference library for transformers X Wang, Y Xiong, Y Wei, M Wang, L Li arXiv preprint arXiv:2010.13887, 2020 | 56 | 2020 |
Deep neural machine translation with linear associative unit M Wang, Z Lu, J Zhou, Q Liu arXiv preprint arXiv:1705.00861, 2017 | 52 | 2017 |