CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. BY Lin, W Zhou, M Shen, P Zhou, C Bhagavatula, Y Choi, X Ren. EMNLP 2020 (Findings), 2019. Cited by 336.
BERT Loses Patience: Fast and Robust Inference with Early Exit. W Zhou*, C Xu*, T Ge, J McAuley, K Xu, F Wei. NeurIPS 2020. Cited by 272.
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. C Xu*, W Zhou*, T Ge, F Wei, M Zhou. EMNLP 2020. Cited by 205.
BERT-Based Lexical Substitution. W Zhou, T Ge, K Xu, F Wei, M Zhou. Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019. Cited by 110.
A Survey on Green Deep Learning. J Xu*, W Zhou*, Z Fu*, H Zhou, L Li. arXiv preprint arXiv:2111.05193, 2021. Cited by 98*.
BERT Learns to Teach: Knowledge Distillation with Meta Learning. W Zhou*, C Xu*, J McAuley. Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022. Cited by 67.
Pre-Training Text-to-Text Transformers for Concept-Centric Common Sense. W Zhou*, DH Lee*, RK Selvam, S Lee, BY Lin, X Ren. ICLR 2021, 2020. Cited by 67.
RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. ZM Wang, Z Peng, H Que, J Liu, W Zhou, Y Wu, H Guo, R Gan, Z Ni, … arXiv preprint arXiv:2310.00746, 2023. Cited by 60.
Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression. C Xu*, W Zhou*, T Ge, K Xu, J McAuley, F Wei. EMNLP 2021. Cited by 42.
Interactive Natural Language Processing. Z Wang, G Zhang, K Yang, N Shi, W Zhou, S Hao, G Xiong, Y Li, MY Sim, … arXiv preprint arXiv:2305.13246, 2023. Cited by 40.
X2-VLM: All-In-One Pre-trained Model for Vision-Language Tasks. Y Zeng, X Zhang, H Li, J Wang, J Zhang, W Zhou. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. Cited by 39.
Agents: An Open-Source Framework for Autonomous Language Agents. W Zhou*, YE Jiang*, L Li*, J Wu*, T Wang, S Qiu, J Zhang, J Chen, R Wu, … arXiv preprint arXiv:2309.07870, 2023. Cited by 38.
Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models. W Zhou, K Xu. Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 9717-9724, 2020. Cited by 38.
Scheduled DropHead: A Regularization Method for Transformer Models. W Zhou, T Ge, K Xu, F Wei, M Zhou. EMNLP 2020 (Findings). Cited by 37.
Controlled Text Generation with Natural Language Instructions. W Zhou, YE Jiang, E Wilcox, R Cotterell, M Sachan. ICML 2023. Cited by 34.
Towards Interpretable Natural Language Understanding with Explanations as Latent Variables. W Zhou*, J Hu*, H Zhang*, X Liang, M Sun, C Xiong, J Tang. NeurIPS 2020. Cited by 32.
Improving Grammatical Error Correction with Machine Translation Pairs. W Zhou, T Ge, C Mu, K Xu, F Wei, M Zhou. EMNLP 2020 (Findings), 2019. Cited by 32.
To Repeat or Not to Repeat: Insights from Scaling LLM under Token-Crisis. F Xue, Y Fu, W Zhou, Z Zheng, Y You. Advances in Neural Information Processing Systems 36, 2024. Cited by 30.
Self-Adversarial Learning with Comparative Discrimination for Text Generation. W Zhou, T Ge, K Xu, F Wei, M Zhou. ICLR 2020. Cited by 30.
RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. W Zhou, YE Jiang, P Cui, T Wang, Z Xiao, Y Hou, R Cotterell, M Sachan. arXiv preprint arXiv:2305.13304, 2023. Cited by 28.