| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts | T Shin, Y Razeghi, RL Logan IV, E Wallace, S Singh | EMNLP 2020 | 1472 | 2020 |
| Extracting Training Data from Large Language Models | N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ... | USENIX Security 2021 | 1406 | 2020 |
| Calibrate Before Use: Improving Few-Shot Performance of Language Models | TZ Zhao*, E Wallace*, S Feng, D Klein, S Singh | ICML 2021 | 959 | 2021 |
| Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models | A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... | TMLR 2023 | 874 | 2022 |
| Universal Adversarial Triggers for Attacking and Analyzing NLP | E Wallace, S Feng, N Kandpal, M Gardner, S Singh | EMNLP 2019 | 780 | 2019 |
| Evaluating Models' Local Decision Boundaries via Contrast Sets | M Gardner, Y Artzi, V Basmova, J Berant, B Bogin, S Chen, P Dasigi, ... | EMNLP Findings 2020 | 448 | 2020 |
| InCoder: A Generative Model for Code Infilling and Synthesis | D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ... | ICLR 2023 | 427 | 2022 |
| Pretrained Transformers Improve Out-of-Distribution Robustness | D Hendrycks, X Liu, E Wallace, A Dziedzic, R Krishnan, D Song | ACL 2020 | 417 | 2020 |
| Extracting Training Data from Diffusion Models | N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ... | USENIX Security 2023 | 357 | 2023 |
| Pathologies of Neural Models Make Interpretations Difficult | S Feng, E Wallace, II Grissom, M Iyyer, P Rodriguez, J Boyd-Graber | EMNLP 2018 | 353 | 2018 |
| Do NLP Models Know Numbers? Probing Numeracy in Embeddings | E Wallace*, Y Wang*, S Li, S Singh, M Gardner | EMNLP 2019 | 284 | 2019 |
| Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers | Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez | ICML 2020 | 264 | 2020 |
| Large Language Models Struggle to Learn Long-Tail Knowledge | N Kandpal, H Deng, A Roberts, E Wallace, C Raffel | ICML 2023 | 217 | 2022 |
| Deduplicating Training Data Mitigates Privacy Risks in Language Models | N Kandpal, E Wallace, C Raffel | ICML 2022 | 177 | 2022 |
| Koala: A Dialogue Model for Academic Research | X Geng*, A Gudibande*, H Liu*, E Wallace*, P Abbeel, S Levine, D Song | BAIR Blog | 168 | 2023 |
| Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models | RL Logan IV, I Balažević, E Wallace, F Petroni, S Singh, S Riedel | ACL Findings 2022 | 168 | 2021 |
| Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples | E Wallace, P Rodriguez, S Feng, I Yamada, J Boyd-Graber | TACL 2019 | 165* | 2019 |
| AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models | E Wallace, J Tuyls, J Wang, S Subramanian, M Gardner, S Singh | EMNLP Demo 2019 | 153 | 2019 |
| Compositional Questions Do Not Necessitate Multi-hop Reasoning | S Min*, E Wallace*, S Singh, M Gardner, H Hajishirzi, L Zettlemoyer | ACL 2019 | 144 | 2019 |
| Concealed Data Poisoning Attacks on NLP Models | E Wallace*, TZ Zhao*, S Feng, S Singh | NAACL 2021 | 143* | 2020 |