Yanai Elazar
Postdoctoral Researcher at AI2 & UW
Verified email at macs.biu.ac.il · Homepage
Title · Cited by · Year
A survey on data selection for language models
A Albalak, Y Elazar, SM Xie, S Longpre, N Lambert, X Wang, ...
arXiv preprint arXiv:2402.16827, 2024
Cited by 24 · 2024
A taxonomy and review of generalization research in NLP
D Hupkes, M Giulianelli, V Dankers, M Artetxe, Y Elazar, T Pimentel, ...
Nature Machine Intelligence 5 (10), 1161-1174, 2023
Cited by 89* · 2023
Adversarial removal of demographic attributes from text data
Y Elazar, Y Goldberg
arXiv preprint arXiv:1808.06640, 2018
Cited by 348 · 2018
Adversarial removal of demographic attributes revisited
M Barrett, Y Kementchedjhieva, Y Elazar, D Elliott, A Søgaard
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 55 · 2019
Adversarial user representations in recommender machine learning models
YS Resheff, S Shahar, OS Shalom, Y Elazar
US Patent 11,494,701, 2022
Cited by 2 · 2022
Amnesic probing: Behavioral explanation with amnesic counterfactuals
Y Elazar, S Ravfogel, A Jacovi, Y Goldberg
Transactions of the Association for Computational Linguistics 9, 160-175, 2021
Cited by 201 · 2021
Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation
B Iluz, Y Elazar, A Yehudai, G Stanovsky
arXiv preprint arXiv:2406.00787, 2024
2024
At your fingertips: Extracting piano fingering instructions from videos
A Moryossef, Y Elazar, Y Goldberg
arXiv preprint arXiv:2303.03745, 2023
Cited by 6* · 2023
Back to square one: Artifact detection, training and commonsense disentanglement in the winograd schema
Y Elazar, H Zhang, Y Goldberg, D Roth
arXiv preprint arXiv:2104.08161, 2021
Cited by 46 · 2021
Backtracking Mathematical Reasoning of Language Models to the Pretraining Data
Y Razeghi, H Ivison, S Singh, Y Elazar
The Second Tiny Papers Track at ICLR 2024
2024
Calibrating large language models with sample consistency
Q Lyu, K Shridhar, C Malaviya, L Zhang, Y Elazar, N Tandon, ...
arXiv preprint arXiv:2402.13904, 2024
Cited by 3 · 2024
CIKQA: Learning commonsense inference with a unified knowledge-in-the-loop QA paradigm
H Zhang, Y Huo, Y Elazar, Y Song, Y Goldberg, D Roth
arXiv preprint arXiv:2210.06246, 2022
Cited by 1 · 2022
Contrastive explanations for model interpretability
A Jacovi, S Swayamdipta, S Ravfogel, Y Elazar, Y Choi, Y Goldberg
arXiv preprint arXiv:2103.01378, 2021
Cited by 96 · 2021
Detection and measurement of syntactic templates in generated text
C Shaib, Y Elazar, JJ Li, BC Wallace
arXiv preprint arXiv:2407.00211, 2024
Cited by 2 · 2024
Do language embeddings capture scales?
X Zhang, D Ramachandran, I Tenney, Y Elazar, D Roth
arXiv preprint arXiv:2010.05345, 2020
Cited by 79 · 2020
Dolma: An open corpus of three trillion tokens for language model pretraining research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv preprint arXiv:2402.00159, 2024
Cited by 38 · 2024
Estimating the Causal Effect of Early ArXiving on Paper Acceptance
Y Elazar, J Zhang, D Wadden, B Zhang, NA Smith
Causal Learning and Reasoning, 913-933, 2024
Cited by 2 · 2024
Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG
W Merrill, NA Smith, Y Elazar
arXiv preprint arXiv:2406.13069, 2024
Cited by 1 · 2024
Evaluating models' local decision boundaries via contrast sets
M Gardner, Y Artzi, V Basmova, J Berant, B Bogin, S Chen, P Dasigi, ...
arXiv preprint arXiv:2004.02709, 2020
Cited by 454 · 2020
Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation
M Mosbach, T Pimentel, S Ravfogel, D Klakow, Y Elazar
arXiv preprint arXiv:2305.16938, 2023
Cited by 56 · 2023
Articles 1–20