Bayesian design principles for frequentist sequential learning
We develop a general theory to optimize the frequentist regret for sequential learning
problems, where efficient bandit and reinforcement learning algorithms can be derived from …
Improved Bayesian regret bounds for Thompson sampling in reinforcement learning
A Moradipari, M Pedramfar… - Advances in …, 2023 - proceedings.neurips.cc
In this paper, we prove state-of-the-art Bayesian regret bounds for Thompson Sampling in
reinforcement learning in a multitude of settings. We present a refined analysis of the …
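The entry above concerns Bayesian regret bounds for Thompson Sampling in reinforcement learning. As a point of reference only, here is a minimal sketch of Thompson Sampling in its simplest setting, a Bernoulli bandit with Beta(1, 1) priors; the function name, arm probabilities, and horizon are illustrative assumptions and do not come from the paper.

```python
import numpy as np

def thompson_sampling_bernoulli(arms_prob, horizon, seed=0):
    """Thompson Sampling for a Bernoulli bandit with Beta(1, 1) priors.

    arms_prob: hypothetical true success probabilities, used only to simulate rewards.
    Returns the sequence of chosen arms.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(arms_prob)
    successes = np.ones(n_arms)  # Beta posterior alpha parameters
    failures = np.ones(n_arms)   # Beta posterior beta parameters
    choices = []
    for _ in range(horizon):
        # Draw one plausible mean per arm from the posterior and act greedily on the draw.
        theta = rng.beta(successes, failures)
        arm = int(np.argmax(theta))
        reward = rng.random() < arms_prob[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        choices.append(arm)
    return choices

# Example: the sampler concentrates on the 0.8 arm as its posterior sharpens.
print(thompson_sampling_bernoulli([0.3, 0.5, 0.8], horizon=1000)[-10:])
```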
Linear partial monitoring for sequential decision making: Algorithms, regret bounds and applications
Partial monitoring is an expressive framework for sequential decision-making with an
abundance of applications, including graph-structured and dueling bandits, dynamic pricing …
Deciding what to model: Value-equivalent sampling for reinforcement learning
D Arumugam, B Van Roy - Advances in neural information …, 2022 - proceedings.neurips.cc
The quintessential model-based reinforcement-learning agent iteratively refines its
estimates or prior beliefs about the true underlying model of the environment. Recent …
Leveraging demonstrations to improve online learning: Quality matters
We investigate the extent to which offline demonstration data can improve online learning. It
is natural to expect some improvement, but the question is how, and by how much? We …
Value of Information and Reward Specification in Active Inference and POMDPs
R Wei - arXiv preprint arXiv:2408.06542, 2024 - arxiv.org
Expected free energy (EFE) is a central quantity in active inference which has recently
gained popularity due to its intuitive decomposition of the expected value of control into a …
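The snippet is cut off mid-sentence, but the decomposition it refers to is, in its most common form in the active inference literature, a split of expected free energy into an epistemic (information-gain) term and a pragmatic (extrinsic-value) term. The sketch below uses generic notation ($q$ for the variational posterior over states $s$ given a policy $\pi$, $\tilde{p}(o)$ for prior preferences over outcomes $o$) and may differ from the paper's conventions.

$$G(\pi) \;=\; -\,\underbrace{\mathbb{E}_{q(o\mid\pi)}\!\big[ D_{\mathrm{KL}}\!\big(q(s\mid o,\pi)\,\|\,q(s\mid\pi)\big)\big]}_{\text{epistemic value (information gain)}} \;-\; \underbrace{\mathbb{E}_{q(o\mid\pi)}\big[\ln \tilde{p}(o)\big]}_{\text{pragmatic value}}.$$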
Bayesian reinforcement learning with limited cognitive load
All biological and artificial agents must act given limits on their ability to acquire and process
information. As such, a general theory of adaptive behavior should be able to account for the …
Probabilistic inference in reinforcement learning done right
J Tarbouriech, T Lattimore… - Advances in Neural …, 2024 - proceedings.neurips.cc
A popular perspective in Reinforcement learning (RL) casts the problem as probabilistic
inference on a graphical model of the Markov decision process (MDP). The core object of …
Steering: Stein information directed exploration for model-based reinforcement learning
Directed Exploration is a crucial challenge in reinforcement learning (RL), especially when
rewards are sparse. Information-directed sampling (IDS), which optimizes the information …
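The snippet names information-directed sampling (IDS). In the standard formulation of Russo and Van Roy, IDS selects the action distribution that minimizes the information ratio, sketched below with $\Delta_t(a)$ the expected instantaneous regret of action $a$ and $I_t(a)$ its expected information gain about the optimal action; the exact objective optimized in this paper (via Stein discrepancies) may differ.

$$\pi_t \in \arg\min_{\pi \in \Delta(\mathcal{A})} \Psi_t(\pi), \qquad \Psi_t(\pi) \;=\; \frac{\big(\sum_{a} \pi(a)\,\Delta_t(a)\big)^2}{\sum_{a} \pi(a)\, I_t(a)}.$$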
Dynamic Online Recommendation for Two-Sided Market with Bayesian Incentive Compatibility
Recommender systems play a crucial role in internet economies by connecting users with
relevant products or services. However, designing effective recommender systems faces two …