CodeBERT: A pre-trained model for programming and natural languages Z Feng, D Guo, D Tang, N Duan, X Feng, M Gong, L Shou, B Qin, T Liu, ... arXiv preprint arXiv:2002.08155, 2020 | 2137 | 2020 |
Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training G Li, N Duan, Y Fang, M Gong, D Jiang Proceedings of the AAAI Conference on Artificial Intelligence 34 (07), 11336 …, 2020 | 886 | 2020 |
GraphCodeBERT: Pre-training code representations with data flow D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, L Zhou, N Duan, ... arXiv preprint arXiv:2009.08366, 2020 | 810 | 2020 |
CodeXGLUE: A machine learning benchmark dataset for code understanding and generation S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ... arXiv preprint arXiv:2102.04664, 2021 | 628 | 2021 |
K-adapter: Infusing knowledge into pre-trained models with adapters R Wang, D Tang, N Duan, Z Wei, X Huang, G Cao, D Jiang, M Zhou arXiv preprint arXiv:2002.01808, 2020 | 514 | 2020 |
Visual ChatGPT: Talking, drawing and editing with visual foundation models C Wu, S Yin, W Qi, X Wang, Z Tang, N Duan arXiv preprint arXiv:2303.04671, 2023 | 497 | 2023 |
ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training W Qi, Y Yan, Y Gong, D Liu, N Duan, J Chen, R Zhang, M Zhou arXiv preprint arXiv:2001.04063, 2020 | 441 | 2020 |
UniVL: A unified video and language pre-training model for multimodal understanding and generation H Luo, L Ji, B Shi, H Huang, N Duan, T Li, J Li, T Bharti, M Zhou arXiv preprint arXiv:2002.06353, 2020 | 422 | 2020 |
UniXcoder: Unified cross-modal pre-training for code representation D Guo, S Lu, N Duan, Y Wang, M Zhou, J Yin arXiv preprint arXiv:2203.03850, 2022 | 395 | 2022 |
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li Neurocomputing 508, 293-304, 2022 | 384 | 2022 |
Question generation for question answering N Duan, D Tang, P Chen, M Zhou Proceedings of the 2017 conference on empirical methods in natural language …, 2017 | 332 | 2017 |
XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong, L Shou, D Jiang, ... arXiv preprint arXiv:2004.01401, 2020 | 293 | 2020 |
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li arXiv preprint arXiv:2104.08860, 2021 | 283 | 2021 |
Constraint-based question answering with knowledge graph J Bao, N Duan, Z Yan, M Zhou, T Zhao Proceedings of COLING 2016, the 26th international conference on …, 2016 | 277 | 2016 |
ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data D Qi, L Su, J Song, E Cui, T Bharti, A Sacheti arXiv preprint arXiv:2001.07966, 2020 | 264 | 2020 |
NÜWA: Visual synthesis pre-training for neural visual world creation C Wu, J Liang, L Ji, F Yang, Y Fang, D Jiang, N Duan European Conference on Computer Vision, 720-736, 2022 | 262 | 2022 |
Pretraining-based natural language generation for text summarization H Zhang, J Xu, J Wang arXiv preprint arXiv:1902.09243, 2019 | 254 | 2019 |
Building task-oriented dialogue systems for online shopping Z Yan, N Duan, P Chen, M Zhou, J Zhou, Z Li Proceedings of the AAAI conference on artificial intelligence 31 (1), 2017 | 229 | 2017 |
GraphCodeBERT: Pre-training code representations with data flow D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, ... 9th International Conference on Learning Representations, ICLR, 2021 | 225 | 2021 |