Robin Staab
Student at ETH Zurich
Verified email at inf.ethz.ch
Title
Cited by
Year
Bayesian framework for gradient leakage
M Balunović, DI Dimitrov, R Staab, M Vechev
arXiv preprint arXiv:2111.04706, 2021
47 · 2021
Beyond memorization: Violating privacy via inference with large language models
R Staab, M Vero, M Balunović, M Vechev
arXiv preprint arXiv:2310.07298, 2023
46 · 2023
Effective certification of monotone deep equilibrium models
MN Müller, R Staab, M Fischer, MT Vechev
arXiv preprint arXiv:2110.08260, 2021
6 · 2021
Watermark stealing in large language models
N Jovanović, R Staab, M Vechev
arXiv preprint arXiv:2402.19361, 2024
5 · 2024
Abstract interpretation of fixpoint iterators with applications to neural networks
MN Müller, M Fischer, R Staab, M Vechev
Proceedings of the ACM on Programming Languages 7 (PLDI), 786-810, 2023
4 · 2023
Large language models are advanced anonymizers
R Staab, M Vero, M Balunović, M Vechev
arXiv preprint arXiv:2402.13846, 2024
3 · 2024
Private Attribute Inference from Images with Vision-Language Models
B Tömekçe, M Vero, R Staab, M Vechev
arXiv preprint arXiv:2404.10618, 2024
1 · 2024
From Principle to Practice: Vertical Data Minimization for Machine Learning
R Staab, N Jovanović, M Balunović, M Vechev
arXiv preprint arXiv:2311.10500, 2023
1 · 2023
A Synthetic Dataset for Personal Attribute Inference
H Yukhymenko, R Staab, M Vero, M Vechev
arXiv preprint arXiv:2406.07217, 2024
2024
Exploiting LLM Quantization
K Egashira, M Vero, R Staab, J He, M Vechev
arXiv preprint arXiv:2405.18137, 2024
2024
Back to the Drawing Board for Fair Representation Learning
A Pouget, N Jovanović, M Vero, R Staab, M Vechev
arXiv preprint arXiv:2405.18161, 2024
2024
Black-Box Detection of Language Model Watermarks
T Gloaguen, N Jovanović, R Staab, M Vechev
Large Language Models are Anonymizers
R Staab, M Vero, M Balunović, M Vechev
ICLR 2024 Workshop on Reliable and Responsible Foundation Models
Articles 1–13