Julian Martin Eisenschlos
NLP Researcher, Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 830 · 2023
TAPAS: Weakly Supervised Table Parsing via Pre-training
J Herzig, PK Nowak, T Müller, F Piccinno, JM Eisenschlos
Proceedings of ACL 2020, 2020
Cited by 529 · 2020
Time-aware language models as temporal knowledge bases
B Dhingra, JR Cole, JM Eisenschlos, D Gillick, J Eisenstein, WW Cohen
Transactions of the Association for Computational Linguistics 10, 257-273, 2022
Cited by 185 · 2022
Pix2struct: Screenshot parsing as pretraining for visual language understanding
K Lee, M Joshi, IR Turc, H Hu, F Liu, JM Eisenschlos, U Khandelwal, ...
International Conference on Machine Learning, 18893-18912, 2023
Cited by 135 · 2023
Multifit: Efficient multi-lingual language model fine-tuning
JM Eisenschlos, S Ruder, P Czapla, M Kardas, S Gugger, J Howard
Proceedings of EMNLP-IJCNLP 2019, 2019
Cited by 108 · 2019
Understanding tables with intermediate pre-training
JM Eisenschlos, S Krichene, T Müller
Findings of EMNLP 2020, 2020
Cited by 96 · 2020
Open Domain Question Answering over Tables via Dense Retrieval
J Herzig, T Müller, S Krichene, JM Eisenschlos
Proceedings of NAACL 2021, 2021
Cited by 75 · 2021
MATE: Multi-view attention for table transformer efficiency
JM Eisenschlos, M Gor, T Müller, W Cohen
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021
Cited by 70 · 2021
SoftSort: A Continuous Relaxation for the argsort Operator
S Prillo, JM Eisenschlos
Proceedings of ICML 2020, 2020
Cited by 56 · 2020
Deplot: One-shot visual language reasoning by plot-to-table translation
F Liu, JM Eisenschlos, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, ...
arXiv preprint arXiv:2212.10505, 2022
Cited by 49 · 2022
Matcha: Enhancing visual language pretraining with math reasoning and chart derendering
F Liu, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, Y Altun, N Collier, ...
arXiv preprint arXiv:2212.09662, 2022
Cited by 43 · 2022
Fool Me Twice: Entailment from Wikipedia Gamification
JM Eisenschlos, B Dhingra, J Bulian, B Börschinger, J Boyd-Graber
Proceedings of NAACL 2021, 2021
Cited by 30 · 2021
Selectively answering ambiguous questions
JR Cole, MJQ Zhang, D Gillick, JM Eisenschlos, B Dhingra, J Eisenstein
arXiv preprint arXiv:2305.14613, 2023
Cited by 25 · 2023
Table-to-text generation and pre-training with tabt5
E Andrejczuk, JM Eisenschlos, F Piccinno, S Krichene, Y Altun
arXiv preprint arXiv:2210.09162, 2022
Cited by 25 · 2022
Chain-of-table: Evolving tables in the reasoning chain for table understanding
Z Wang, H Zhang, CL Li, JM Eisenschlos, V Perot, Z Wang, L Miculicich, ...
arXiv preprint arXiv:2401.04398, 2024
Cited by 13 · 2024
TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate pre-training
T Müller, JM Eisenschlos, S Krichene
SemEval 2021, 2021
Cited by 13 · 2021
DoT: An efficient Double Transformer for NLP tasks with tables
S Krichene, T Müller, JM Eisenschlos
Findings of ACL 2021, 2021
Cited by 11 · 2021
Universal self-adaptive prompting
X Wan, R Sun, H Nakhost, H Dai, JM Eisenschlos, SO Arik, T Pfister
arXiv preprint arXiv:2305.14926, 2023
Cited by 6 · 2023
Leveraging data recasting to enhance tabular reasoning
A Jena, V Gupta, M Shrivastava, JM Eisenschlos
arXiv preprint arXiv:2211.12641, 2022
Cited by 5 · 2022
Do ever larger octopi still amplify reporting biases? evidence from judgments of typical colour
F Liu, JM Eisenschlos, JR Cole, N Collier
arXiv preprint arXiv:2209.12786, 2022
Cited by 5 · 2022
Articles 1–20