Deep Exploration via Randomized Value Functions
Abstract

We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.
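To make the core idea concrete, here is a minimal sketch of exploration via a randomized value function in a small tabular setting, in the spirit of randomized least-squares value iteration: rewards are perturbed with Gaussian noise, a Q-function is computed by backward induction on the perturbed data, and the agent acts greedily with respect to that sample. The environment, noise scale, and the assumption of a known transition model are all illustrative simplifications, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 5, 2, 10
sigma = 1.0  # reward-perturbation scale (assumed hyperparameter)

# Illustrative assumption: a known transition model P[s, a] -> next-state
# distribution and empirical mean rewards R[s, a]; in practice these would
# be estimated from observed transitions.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def sample_randomized_Q():
    """Sample one randomized Q-function: perturb rewards, then do a
    finite-horizon Bellman backup on the perturbed rewards."""
    R_tilde = R + sigma * rng.normal(size=R.shape)  # noisy rewards
    Q = np.zeros((horizon + 1, n_states, n_actions))
    for h in range(horizon - 1, -1, -1):
        V_next = Q[h + 1].max(axis=1)   # greedy value at step h+1
        Q[h] = R_tilde + P @ V_next     # backup through the transition model
    return Q[:horizon]

# Acting greedily w.r.t. a fresh sample each episode yields deep,
# temporally consistent exploration: the same optimistic/pessimistic
# perturbation is followed for the whole episode.
Q = sample_randomized_Q()
greedy_policy = Q[0].argmax(axis=1)  # one action per state at step 0
```

Resampling a new randomized Q at the start of each episode is what distinguishes this from dithering schemes such as epsilon-greedy, which randomize independently at every timestep and so cannot commit to a multi-step exploratory plan.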
jmlr.org