Niklas Muennighoff
Verified email at stu.pku.edu.cn - Homepage
Title
Cited by
Year
Bloom: A 176b-parameter open-access multilingual language model
BS Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
JMLR 2023, 2022
1337* · 2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR 2023, 2022
853 · 2022
StarCoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
TMLR 2023, 2023
512* · 2023
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
444 · 2022
A framework for few-shot language model evaluation
L Gao, J Tow, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, ...
GitHub, 2021
415* · 2021
MTEB: Massive text embedding benchmark
N Muennighoff, N Tazi, L Magne, N Reimers
EACL 2023, 2022
214 · 2022
SantaCoder: don't reach for the stars!
LB Allal, R Li, D Kocetkov, C Mou, C Akiki, CM Ferrandis, N Muennighoff, ...
ICLR 2023 DL4C Workshop, Best Paper Award, 2023
162* · 2023
SGPT: GPT sentence embeddings for semantic search
N Muennighoff
arXiv, 2022
120 · 2022
C-Pack: Packaged Resources To Advance General Chinese Embedding
S Xiao, Z Liu, P Zhang, N Muennighoff
SIGIR 2024, 2023
118 · 2023
Scaling Data-Constrained Language Models
N Muennighoff, AM Rush, B Barak, TL Scao, A Piktus, N Tazi, S Pyysalo, ...
NeurIPS 2023 Oral, Outstanding Paper Runner-Up Award, 2023
102 · 2023
What Language Model to Train if You Have One Million GPU Hours?
TL Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, S Biderman, ...
EMNLP 2022 Findings, 2022
84 · 2022
Octopack: Instruction tuning code large language models
N Muennighoff, Q Liu, A Zebaze, Q Zheng, B Hui, TY Zhuo, S Singh, ...
ICLR 2024 Spotlight, NeurIPS 2023 Instruction Workshop, 2023
72 · 2023
Nl-augmenter: A framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
NEJLT 2023, 2021
66 · 2021
The hateful memes challenge: Competition report
D Kiela, H Firooz, A Mohan, V Goswami, A Singh, CA Fitzpatrick, P Bull, ...
NeurIPS 2020 Competitions, 2021
60 · 2021
Kto: Model alignment as prospect theoretic optimization
K Ethayarajh, W Xu, N Muennighoff, D Jurafsky, D Kiela
ICML 2024 Spotlight, 2024
59 · 2024
Vilio: State-of-the-art visio-linguistic models applied to hateful memes
N Muennighoff
NeurIPS 2020 Competitions, 2020
58 · 2020
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
ACL 2024, 2024
44* · 2024
BLOOM+ 1: Adding Language Support to BLOOM for Zero-Shot Prompting
ZX Yong, H Schoelkopf, N Muennighoff, AF Aji, DI Adelani, K Almubarak, ...
ACL 2023, 2022
39 · 2022
StarCoder 2 and The Stack v2: The Next Generation
A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier, N Tazi, A Tang, ...
arXiv preprint arXiv:2402.19173, 2024
34 · 2024
Olmo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
ACL 2024, 2024
30 · 2024
Articles 1–20