Xinpeng Wang
PhD Student, LMU Munich
Verified email at cis.lmu.de - Homepage
Title
Cited by
Year
Sceneformer: Indoor scene generation with transformers
X Wang, C Yeshwanth, M Nießner
International Conference on 3D Vision 2021(3DV 2021), 2021
140 · 2021
"My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models
X Wang, B Ma, C Hu, L Weber-Genzel, P Röttger, F Kreuter, D Hovy, ...
ACL 2024 Findings, 2024
21 · 2024
ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation
X Wang, B Plank
EMNLP 2023 main, 2023
7 · 2023
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives
X Wang, L Weissweiler, H Schütze, B Plank
ACL 2023 main, 2023
6 · 2023
Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think
X Wang, C Hu, B Ma, P Röttger, B Plank
COLM 2024, 2024
3 · 2024
FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models
Y Zhang, Y Li, X Wang, Q Shen, B Plank, B Bischl, M Rezaei, ...
Compression Workshop @ NeurIPS 2024, 2024
2 · 2024
The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models
B Ma, X Wang, T Hu, AC Haensch, MA Hedderich, B Plank, F Kreuter
EMNLP 2024 Findings, 2024
1 · 2024
Understanding When Tree of Thoughts Succeeds: Larger Models Excel in Generation, Not Discrimination
Q Chen, X Wang, P Mondorf, MA Hedderich, B Plank
arXiv preprint arXiv:2410.17820, 2024
2024
Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation
X Wang, C Hu, P Röttger, B Plank
arXiv preprint arXiv:2410.03415, 2024
2024
"Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations?
B Chen, X Wang, S Peng, R Litschko, A Korhonen, B Plank
EMNLP 2024 Findings, 2024
2024
Articles 1–10