WizardLM: Empowering Large Language Models to Follow Complex Instructions. C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, D Jiang. Proc. ICLR, 2024. Cited by 467.
WizardCoder: Empowering Code Large Language Models with Evol-Instruct. Z Luo, C Xu, P Zhao, Q Sun, X Geng, W Hu, C Tao, J Ma, Q Lin, D Jiang. Proc. ICLR, 2024. Cited by 282.
RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems. C Tao, L Mou, D Zhao, R Yan. Proc. AAAI, 722-729, 2018. Cited by 249.
Knowledge-Grounded Dialogue Generation with Pre-trained Language Models. X Zhao, W Wu, C Xu, C Tao, D Zhao, R Yan. Proc. EMNLP, 2020. Cited by 201.
Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation. W Hu*, Z Lin*, B Liu*, C Tao, Z Tao, J Ma, D Zhao, R Yan. Proc. ICLR, 2018. Cited by 164.
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. H Luo, Q Sun, C Xu, P Zhao, J Lou, C Tao, X Geng, Q Lin, S Chen, ... arXiv preprint arXiv:2308.09583, 2023. Cited by 159.
Get The Point of My Utterance! Learning Towards Effective Responses with Multi-Head Attention Mechanism. C Tao, S Gao, M Shang, W Wu, D Zhao, R Yan. Proc. IJCAI, 4418-4424, 2018. Cited by 150.
Multi-Representation Fusion Network for Multi-Turn Response Selection in Retrieval-Based Chatbots. C Tao, W Wu, C Xu, W Hu, D Zhao, R Yan. Proc. WSDM, 267-275, 2019. Cited by 149.
One Time of Interaction May Not be Enough: Go Deep With an Interaction-over-Interaction Network for Response Selection in Dialogues. C Tao, W Wu, C Xu, W Hu, D Zhao, R Yan. Proc. ACL, 2019. Cited by 132.
Low-Resource Knowledge-Grounded Dialogue Generation. X Zhao, W Wu, C Tao, C Xu, D Zhao, R Yan. Proc. ICLR, 2020. Cited by 107.
PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks. Y Wang, C Xu, Q Sun, H Hu, C Tao, X Geng, D Jiang. Proc. ACL, 2022. Cited by 73.
Zero-Resource Knowledge-Grounded Dialogue Generation. L Li, C Xu, W Wu, Y Zhao, X Zhao, C Tao. Proc. NeurIPS, 2020. Cited by 73.
Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues. R Xu, C Tao, D Jiang, X Zhao, D Zhao, R Yan. Proc. AAAI, 2021. Cited by 64.
A Document-grounded Matching Network for Response Selection in Retrieval-based Chatbots. X Zhao*, C Tao*, W Wu, C Xu, D Zhao, R Yan. Proc. IJCAI, 2019. Cited by 43.
Iterative Document Representation Learning Towards Summarization with Polishing. X Chen, S Gao, C Tao, Y Song, D Zhao, R Yan. Proc. EMNLP, 2018. Cited by 43.
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding. JC Gu, C Tao, ZH Ling, C Xu, X Geng, D Jiang. Proc. ACL, 2021. Cited by 41.
Multi-Granularity Structural Knowledge Distillation for Language Model Compression. C Liu, C Tao, J Feng, D Zhao. Proc. ACL, 1001-1011, 2022. Cited by 40.
Neural Response Generation with Meta-Words. C Xu, W Wu, C Tao, H Hu, M Schuerman, Y Wang. Proc. ACL, 2019. Cited by 40.
Sampling Matters! An Empirical Study of Negative Sampling Strategies for Learning of Matching Models in Retrieval-based Dialogue Systems. J Li, C Tao, Y Feng, D Zhao, R Yan. Proc. EMNLP, 1291-1296, 2019. Cited by 34.
Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems. J Feng*, C Tao*, W Wu, Y Feng, D Zhao, R Yan. Proc. ACL, 2019. Cited by 32.