Pengcheng He
Verified email at microsoft.com
Title
Cited by
Year
DeBERTa: Decoding-enhanced BERT with disentangled attention
P He, X Liu, J Gao, W Chen
ICLR 2021, 2020
Cited by 2114 · 2020
On the variance of the adaptive learning rate and beyond
L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han
ICLR 2019, 2019
Cited by 2080 · 2019
Multi-task deep neural networks for natural language understanding
X Liu, P He, W Chen, J Gao
ACL 2019, 2019
Cited by 1354 · 2019
DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing
P He, J Gao, W Chen
ICLR 2023, 2021
Cited by 656 · 2021
Instruction tuning with GPT-4
B Peng, C Li, P He, M Galley, J Gao
arXiv preprint arXiv:2304.03277, 2023
Cited by 525 · 2023
SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization
H Jiang, P He, W Chen, X Liu, J Gao, T Zhao
ACL 2020, 2019
Cited by 438 · 2019
Check your facts and try again: Improving large language models with external knowledge and automated feedback
B Peng, M Galley, P He, H Cheng, Y Xie, Y Hu, Q Huang, L Liden, Z Yu, ...
arXiv preprint arXiv:2302.12813, 2023
Cited by 271 · 2023
Improving multi-task deep neural networks via knowledge distillation for natural language understanding
X Liu, P He, W Chen, J Gao
arXiv preprint arXiv:1904.09482, 2019
Cited by 207 · 2019
Adaptive budget allocation for parameter-efficient fine-tuning
Q Zhang, M Chen, A Bukharin, P He, Y Cheng, W Chen, T Zhao
International Conference on Learning Representations, 2023
Cited by 186 · 2023
Generation-augmented retrieval for open-domain question answering
Y Mao, P He, X Liu, Y Shen, J Gao, J Han, W Chen
arXiv preprint arXiv:2009.08553, 2020
Cited by 175 · 2020
Adversarial training for large neural language models
X Liu, H Cheng, P He, W Chen, Y Wang, H Poon, J Gao
arXiv preprint arXiv:2004.08994, 2020
Cited by 171 · 2020
Diffusion-GAN: Training GANs with Diffusion
Z Wang, H Zheng, P He, W Chen, M Zhou
ICLR 2023, 2022
Cited by 166 · 2022
X-SQL: reinforce schema representation with context
P He, Y Mao, K Chakrabarti, W Chen
arXiv preprint arXiv:1908.08113, 2019
Cited by 103 · 2019
On the variance of the adaptive learning rate and beyond. arXiv 2019
L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han
arXiv preprint arXiv:1908.03265, 2019
Cited by 91 · 2019
DoLa: Decoding by contrasting layers improves factuality in large language models
YS Chuang, Y Xie, H Luo, Y Kim, J Glass, P He
arXiv preprint arXiv:2309.03883, 2023
Cited by 88 · 2023
NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned
S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ...
NeurIPS 2020, 2021
Cited by 68 · 2021
Exploiting structured knowledge in text via graph-guided representation learning
T Shen, Y Mao, P He, G Long, A Trischler, W Chen
arXiv preprint arXiv:2004.14224, 2020
Cited by 62 · 2020
GODEL: Large-scale pre-training for goal-directed dialog
B Peng, M Galley, P He, C Brockett, L Liden, E Nouri, Z Yu, B Dolan, ...
arXiv preprint arXiv:2206.11309, 2022
Cited by 58 · 2022
PLATON: Pruning large transformer models with upper confidence bound of weight importance
Q Zhang, S Zuo, C Liang, A Bukharin, P He, W Chen, T Zhao
International Conference on Machine Learning, 26809-26823, 2022
Cited by 56 · 2022
The Microsoft toolkit of multi-task deep neural networks for natural language understanding
X Liu, Y Wang, J Ji, H Cheng, X Zhu, E Awa, P He, W Chen, H Poon, ...
arXiv preprint arXiv:2002.07972, 2020
Cited by 53 · 2020
Articles 1–20