Qinyuan Ye
Verified email at usc.edu - Homepage
Title
Cited by
Year
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Q Ye, BY Lin, X Ren
EMNLP 2021, 2021
146 · 2021
Refining language models with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
NeurIPS 2021, 2021
43* · 2021
Learning from Explanations with Neural Execution Tree
Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren
ICLR 2020, 2019
43* · 2019
Learning to Generate Task-Specific Adapters from Task Description
Q Ye, X Ren
ACL-IJCNLP 2021 (Short Paper), 2021
30* · 2021
Teaching Machine Comprehension with Compositional Explanations
Q Ye, X Huang, E Boschee, X Ren
Findings of EMNLP 2020, 2020
30 · 2020
Semi-automated protocol disambiguation and code generation
J Yen, T Lévai, Q Ye, X Ren, R Govindan, B Raghavan
SIGCOMM 2021, 272-286, 2021
27 · 2021
Prompt engineering a prompt engineer
Q Ye, M Axmed, R Pryzant, F Khani
arXiv preprint arXiv:2311.05661, 2023
25 · 2023
Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction
Q Ye, L Liu, M Zhang, X Ren
EMNLP-IJCNLP 2019, 2019
21 · 2019
LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from Explanation
DH Lee, R Khanna, BY Lin, J Chen, S Lee, Q Ye, E Boschee, L Neves, ...
ACL 2020 (Demo Track), 2020
19 · 2020
On the Influence of Masking Policies in Intermediate Pre-training
Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa
EMNLP 2021, 2021
13 · 2021
Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts
Q Ye, J Zha, X Ren
Findings of EMNLP 2022, 2022
12* · 2022
Studying strategically: Learning to mask for closed-book QA
Q Ye, BZ Li, S Wang, B Bolte, H Ma, W Yih, X Ren, M Khabsa
arXiv preprint arXiv:2012.15856, 2020
9 · 2020
How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench
Q Ye, HY Fu, X Ren, R Jia
Findings of EMNLP 2023, 2023
6 · 2023
FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning
Q Ye, I Beltagy, ME Peters, X Ren, H Hajishirzi
ACL 2023, 2022
6 · 2022
Estimating Large Language Model Capabilities without Labeled Test Data
HY Fu, Q Ye, A Xu, X Ren, R Jia
Findings of EMNLP 2023, 2023
5 · 2023
LLM-driven Instruction Following: Progresses and Concerns
W Yin, Q Ye, P Liu, X Ren, H Schütze
EMNLP 2023, 2023
4 · 2023
Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models
Q Ye, M Khabsa, M Lewis, S Wang, X Ren, A Jaech
NAACL 2022, 2021
2 · 2021
Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack
X Xu, Q Ye, X Ren
arXiv preprint arXiv:2407.16695, 2024
2024
Cross-Task Generalization Abilities of Large Language Models
Q Ye
NAACL 2024, 2024
2024