| Publication | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| The Importance of Non-Markovianity in Maximum State Entropy Exploration | M Mutti, R De Santi, M Restelli | ICML 2022 | 23 | 2022 |
| Challenging Common Assumptions in Convex Reinforcement Learning | M Mutti, R De Santi, P De Bartolomeis, M Restelli | NeurIPS 2022 | 17 | 2022 |
| Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization | M Mutti, R De Santi, E Rossi, JF Calderon, M Bronstein, M Restelli | AAAI 2022 | 15* | 2022 |
| Convex reinforcement learning in finite trials | M Mutti, R De Santi, P De Bartolomeis, M Restelli | JMLR 24 (250), 1–42 | 10 | 2023 |
| Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods | R De Santi, M Prajapat, A Krause | ICML 2024 | 1 | 2024 |
| Non-Markovian Policies for Unsupervised Reinforcement Learning in Multiple Environments | P Maldini, M Mutti, R De Santi, M Restelli | ICML 2022 Workshop: First Workshop on Pre-training: Perspectives, Pitfalls … | 1* | 2022 |
| Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction | R De Santi, FA Joseph, N Liniger, M Mutti, A Krause | ICML 2024 | | 2024 |
| Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning | M Mutti, R De Santi, M Restelli, A Marx, G Ramponi | ICLR 2024 | | 2023 |