Riccardo De Santi
ETH AI Center
Verified email at ethz.ch - Homepage
Title | Cited by | Year
The Importance of Non-Markovianity in Maximum State Entropy Exploration
M Mutti, R De Santi, M Restelli
ICML 2022, 2022
Cited by: 23 | Year: 2022
Challenging Common Assumptions in Convex Reinforcement Learning
M Mutti, R De Santi, P De Bartolomeis, M Restelli
NeurIPS 2022, 2022
Cited by: 17 | Year: 2022
Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization
M Mutti, R De Santi, E Rossi, JF Calderon, M Bronstein, M Restelli
AAAI 2022, 2022
Cited by: 15* | Year: 2022
Convex reinforcement learning in finite trials
M Mutti, R De Santi, P De Bartolomeis, M Restelli
JMLR 24 (250), 1-42, 2023
Cited by: 10 | Year: 2023
Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods
R De Santi, M Prajapat, A Krause
ICML 2024, 2024
Cited by: 1 | Year: 2024
Non-Markovian Policies for Unsupervised Reinforcement Learning in Multiple Environments
P Maldini, M Mutti, R De Santi, M Restelli
ICML 2022 Workshop: First Workshop on Pre-training: Perspectives, Pitfalls …
Cited by: 1*
Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction
R De Santi, FA Joseph, N Liniger, M Mutti, A Krause
ICML 2024, 2024
Year: 2024
Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning
M Mutti, R De Santi, M Restelli, A Marx, G Ramponi
ICLR 2024, 2023
Year: 2023
Articles 1–8