Léo GRINSZTAJN
PhD Student, INRIA
Verified email at inria.fr
Title
Cited by
Year
Why do tree-based models still outperform deep learning on typical tabular data?
L Grinsztajn, E Oyallon, G Varoquaux
Advances in neural information processing systems 35, 507-520, 2022
Cited by 940 · 2022
Bayesian workflow for disease transmission modeling in Stan
L Grinsztajn, E Semenova, CC Margossian, J Riou
Statistics in medicine 40 (27), 6209-6234, 2021
Cited by 54 · 2021
Interpreting neural networks through the polytope lens
S Black, L Sharkey, L Grinsztajn, E Winsor, D Braun, J Merizian, K Parker, ...
arXiv preprint arXiv:2211.12312, 2022
Cited by 15 · 2022
MetFlow: a new efficient method for bridging the gap between Markov chain Monte Carlo and variational inference
A Thin, N Kotelevskii, JS Denain, L Grinsztajn, A Durmus, M Panov, ...
arXiv preprint arXiv:2002.12253, 2020
Cited by 15 · 2020
CARTE: pretraining and transfer for tabular learning
MJ Kim, L Grinsztajn, G Varoquaux
arXiv preprint arXiv:2402.16785, 2024
Cited by 1 · 2024
Better by Default: Strong Pre-Tuned MLPs and Boosted Trees on Tabular Data
D Holzmüller, L Grinsztajn, I Steinwart
arXiv preprint arXiv:2407.04491, 2024
2024
Vectorizing string entries for data processing on tables: when are larger language models better?
L Grinsztajn, E Oyallon, MJ Kim, G Varoquaux
arXiv preprint arXiv:2312.09634, 2023
2023
Modeling string entries for tabular data prediction: do we need big large language models?
L Grinsztajn, MJ Kim, E Oyallon, G Varoquaux
NeurIPS 2023 Second Table Representation Learning Workshop, 2023
Attributing Mode Collapse in the Fine-Tuning of Large Language Models
L O’Mahony, L Grinsztajn, H Schoelkopf, S Biderman