Iz Beltagy
Allen Institute for Artificial Intelligence
Verified email at beltagy.net - Homepage
Title
Cited by
Year
Longformer: The long-document transformer
I Beltagy, ME Peters, A Cohan
arXiv preprint arXiv:2004.05150, 2020
Cited by 3647 · 2020
SciBERT: A Pretrained Language Model for Scientific Text
I Beltagy, K Lo, A Cohan
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 3457 · 2019
Don't stop pretraining: Adapt language models to domains and tasks
S Gururangan, A Marasović, S Swayamdipta, K Lo, I Beltagy, D Downey, ...
arXiv preprint arXiv:2004.10964, 2020
Cited by 2089 · 2020
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1306 · 2023
ScispaCy: fast and robust models for biomedical natural language processing
M Neumann, D King, I Beltagy, W Ammar
arXiv preprint arXiv:1902.07669, 2019
Cited by 740 · 2019
Construction of the literature graph in semantic scholar
W Ammar, D Groeneveld, C Bhagavatula, I Beltagy, M Crawford, ...
arXiv preprint arXiv:1805.02262, 2018
Cited by 469 · 2018
Specter: Document-level representation learning using citation-informed transformers
A Cohan, S Feldman, I Beltagy, D Downey, DS Weld
arXiv preprint arXiv:2004.07180, 2020
Cited by 447 · 2020
How far can camels go? exploring the state of instruction tuning on open resources
Y Wang, H Ivison, P Dasigi, J Hessel, T Khot, K Chandu, D Wadden, ...
Advances in Neural Information Processing Systems 36, 74764-74786, 2023
Cited by 171 · 2023
PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization
W Xiao, I Beltagy, G Carenini, A Cohan
arXiv preprint arXiv:2110.08499, 2021
Cited by 151 · 2021
A dataset of information-seeking questions and answers anchored in research papers
P Dasigi, K Lo, I Beltagy, A Cohan, NA Smith, M Gardner
arXiv preprint arXiv:2105.03011, 2021
Cited by 150 · 2021
SciREX: A challenge dataset for document-level information extraction
S Jain, M Van Zuylen, H Hajishirzi, I Beltagy
arXiv preprint arXiv:2005.00512, 2020
Cited by 145 · 2020
Pretrained language models for sequential sentence classification
A Cohan, I Beltagy, D King, B Dalvi, DS Weld
arXiv preprint arXiv:1909.04054, 2019
Cited by 143 · 2019
Machine learning for reliable mmwave systems: Blockage prediction and proactive handoff
A Alkhateeb, I Beltagy, S Alex
2018 IEEE Global conference on signal and information processing (GlobalSIP …, 2018
Cited by 141 · 2018
What language model architecture and pretraining objective works best for zero-shot generalization?
T Wang, A Roberts, D Hesslow, T Le Scao, HW Chung, I Beltagy, ...
International Conference on Machine Learning, 22964-22984, 2022
Cited by 126 · 2022
Montague meets Markov: Deep semantics with probabilistic logical form
I Beltagy, C Chau, G Boleda, D Garrette, K Erk, R Mooney
2nd Joint Conference on Lexical and Computational Semantics: *SEM 2013, 11-21, 2013
Cited by 108 · 2013
Flex: Unifying evaluation for few-shot nlp
J Bragg, A Cohan, K Lo, I Beltagy
Advances in Neural Information Processing Systems 34, 15787-15800, 2021
Cited by 100 · 2021
Ms2: Multi-document summarization of medical studies
J DeYoung, I Beltagy, M van Zuylen, B Kuehl, LL Wang
arXiv preprint arXiv:2104.06486, 2021
Cited by 99 · 2021
Introducing CAD: the contextual abuse dataset
B Vidgen, D Nguyen, H Margetts, P Rossini, R Tromble
Cited by 90 · 2021
Few-shot self-rationalization with natural language prompts
A Marasović, I Beltagy, D Downey, ME Peters
arXiv preprint arXiv:2111.08284, 2021
Cited by 86 · 2021
What language model to train if you have one million gpu hours?
TL Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, ...
arXiv preprint arXiv:2210.15424, 2022
Cited by 85 · 2022
Articles 1–20