DeBERTa: Decoding-enhanced BERT with disentangled attention P He, X Liu, J Gao, W Chen ICLR 2021, 2020 | 2114 | 2020 |
On the variance of the adaptive learning rate and beyond L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han ICLR 2020, 2019 | 2080 | 2019 |
Multi-task deep neural networks for natural language understanding X Liu, P He, W Chen, J Gao ACL 2019, 2019 | 1354 | 2019 |
DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing P He, J Gao, W Chen ICLR 2023, 2021 | 656 | 2021 |
Instruction tuning with GPT-4 B Peng, C Li, P He, M Galley, J Gao arXiv preprint arXiv:2304.03277, 2023 | 525 | 2023 |
SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization H Jiang, P He, W Chen, X Liu, J Gao, T Zhao ACL 2020, 2019 | 438 | 2019 |
Check your facts and try again: Improving large language models with external knowledge and automated feedback B Peng, M Galley, P He, H Cheng, Y Xie, Y Hu, Q Huang, L Liden, Z Yu, ... arXiv preprint arXiv:2302.12813, 2023 | 271 | 2023 |
Improving multi-task deep neural networks via knowledge distillation for natural language understanding X Liu, P He, W Chen, J Gao arXiv preprint arXiv:1904.09482, 2019 | 207 | 2019 |
Adaptive budget allocation for parameter-efficient fine-tuning Q Zhang, M Chen, A Bukharin, P He, Y Cheng, W Chen, T Zhao ICLR 2023, 2023 | 186 | 2023 |
Generation-augmented retrieval for open-domain question answering Y Mao, P He, X Liu, Y Shen, J Gao, J Han, W Chen arXiv preprint arXiv:2009.08553, 2020 | 175 | 2020 |
Adversarial training for large neural language models X Liu, H Cheng, P He, W Chen, Y Wang, H Poon, J Gao arXiv preprint arXiv:2004.08994, 2020 | 171 | 2020 |
Diffusion-GAN: Training GANs with Diffusion Z Wang, H Zheng, P He, W Chen, M Zhou ICLR 2023, 2022 | 166 | 2022 |
X-SQL: reinforce schema representation with context P He, Y Mao, K Chakrabarti, W Chen arXiv preprint arXiv:1908.08113, 2019 | 103 | 2019 |
On the variance of the adaptive learning rate and beyond. arXiv 2019 L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han arXiv preprint arXiv:1908.03265, 2019 | 91 | 2019 |
DoLa: Decoding by contrasting layers improves factuality in large language models YS Chuang, Y Xie, H Luo, Y Kim, J Glass, P He arXiv preprint arXiv:2309.03883, 2023 | 88 | 2023 |
NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ... NeurIPS 2020, 2021 | 68 | 2021 |
Exploiting structured knowledge in text via graph-guided representation learning T Shen, Y Mao, P He, G Long, A Trischler, W Chen arXiv preprint arXiv:2004.14224, 2020 | 62 | 2020 |
GODEL: Large-scale pre-training for goal-directed dialog B Peng, M Galley, P He, C Brockett, L Liden, E Nouri, Z Yu, B Dolan, ... arXiv preprint arXiv:2206.11309, 2022 | 58 | 2022 |
PLATON: Pruning large transformer models with upper confidence bound of weight importance Q Zhang, S Zuo, C Liang, A Bukharin, P He, W Chen, T Zhao ICML 2022, 26809-26823, 2022 | 56 | 2022 |
The microsoft toolkit of multi-task deep neural networks for natural language understanding X Liu, Y Wang, J Ji, H Cheng, X Zhu, E Awa, P He, W Chen, H Poon, ... arXiv preprint arXiv:2002.07972, 2020 | 53 | 2020 |