"Multitask prompted training enables zero-shot task generalization." V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, et al. arXiv preprint arXiv:2110.08207, 2021. Cited by 1445.
"BLOOM: A 176B-parameter open-access multilingual language model." T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, et al. 2023. Cited by 1376.
"Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models." A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, et al. arXiv preprint arXiv:2206.04615, 2022. Cited by 900.
"NL-Augmenter: A framework for task-sensitive natural language augmentation." KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, et al. arXiv preprint arXiv:2112.02721, 2021. Cited by 67.
"Socratic questioning of novice debuggers: A benchmark dataset and preliminary evaluations." E Al-Hossami, R Bunescu, R Teehan, L Powell, K Mahajan, M Dorodchi. Proceedings of the 18th Workshop on Innovative Use of NLP for Building …, 2023. Cited by 14.
"Emergent structures and training dynamics in large language models." R Teehan, M Clinciu, O Serikov, E Szczechla, N Seelam, S Mirkin, et al. Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives …, 2022. Cited by 14.
"Cut the CARP: Fishing for zero-shot story evaluation." S Matiana, JR Smith, R Teehan, L Castricato, S Biderman, L Gao, et al. arXiv preprint arXiv:2110.03111, 2021. Cited by 14.
"Can language models employ the Socratic method? Experiments with code debugging." E Al-Hossami, R Bunescu, J Smith, R Teehan. Proceedings of the 55th ACM Technical Symposium on Computer Science …, 2024. Cited by 7.
"CoLLEGe: Concept Embedding Generation for Large Language Models." R Teehan, B Lake, M Ren. arXiv preprint arXiv:2403.15362, 2024. Cited by 1.
"SPICEs: Survey papers as interactive cheatsheet embeddings." M McAteer, R Teehan. Beyond static papers: Rethinking how we share scientific understanding in ML ….