Sceneformer: Indoor Scene Generation with Transformers. X Wang, C Yeshwanth, M Nießner. International Conference on 3D Vision (3DV 2021), 2021. Cited by 140.
"My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models. X Wang, B Ma, C Hu, L Weber-Genzel, P Röttger, F Kreuter, D Hovy, et al. ACL 2024 Findings, 2024. Cited by 21.
ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation. X Wang, B Plank. EMNLP 2023 (main), 2023. Cited by 7.
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives. X Wang, L Weissweiler, H Schütze, B Plank. ACL 2023 (main), 2023. Cited by 6.
Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think. X Wang, C Hu, B Ma, P Röttger, B Plank. COLM 2024, 2024. Cited by 3.
FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models. Y Zhang, Y Li, X Wang, Q Shen, B Plank, B Bischl, M Rezaei, et al. Compression Workshop @ NeurIPS 2024, 2024. Cited by 2.
The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models. B Ma, X Wang, T Hu, AC Haensch, MA Hedderich, B Plank, F Kreuter. EMNLP 2024 Findings, 2024. Cited by 1.
Understanding When Tree of Thoughts Succeeds: Larger Models Excel in Generation, Not Discrimination. Q Chen, X Wang, P Mondorf, MA Hedderich, B Plank. arXiv preprint arXiv:2410.17820, 2024.
Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation. X Wang, C Hu, P Röttger, B Plank. arXiv preprint arXiv:2410.03415, 2024.
"Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations? B Chen, X Wang, S Peng, R Litschko, A Korhonen, B Plank. EMNLP 2024 Findings, 2024.