Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning
Humans are capable of attributing latent mental contents such as beliefs or intentions to others. This social skill is critical in daily life for reasoning about the potential consequences of others' behaviors so as to plan ahead. It is known that humans use this reasoning ability recursively by considering what others believe about their own beliefs. In this paper, we start from level-$1$ recursion and introduce a probabilistic recursive reasoning (PR2) framework for multi-agent reinforcement learning. Our hypothesis is that it is beneficial for each agent to account for how the opponents would react to its future behaviors. Under the PR2 framework, we adopt variational Bayes methods to approximate the opponents' conditional policies, to which each agent finds the best response and then improves its own policy. We develop decentralized-training-decentralized-execution algorithms, namely PR2-Q and PR2-Actor-Critic, which provably converge in self-play scenarios when there is only one Nash equilibrium. Our methods are tested on a matrix game and a differential game, both of which have a non-trivial equilibrium where common gradient-based methods fail to converge. Our experiments show that it is critical to reason about what the opponents believe about what the agent believes. We expect our work to contribute a new idea for modeling opponents to the multi-agent reinforcement learning community.
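The abstract's central idea, each agent accounting for how the opponents would react to its own actions, can be sketched concretely. Under level-$1$ recursion the joint policy factorizes as $\pi(a_i, a_{-i}|s) = \pi_i(a_i|s)\,\rho_{-i}(a_{-i}|s, a_i)$, and the agent scores each of its actions by marginalizing over the modeled reaction. The Python sketch below illustrates this on a 2x2 matrix game; the softmax reaction model, the payoff matrices, and all function names here are assumptions chosen for exposition, not the paper's variational-Bayes construction or its exact update rules.

```python
# Minimal sketch of level-1 recursive reasoning on a two-player matrix game.
# The soft-best-response opponent model below is an illustrative assumption,
# standing in for the learned conditional policy rho_{-i}(a_{-i} | s, a_i).
import numpy as np

# Payoff matrices: R1[a1, a2] is agent 1's reward, R2[a1, a2] is agent 2's.
# A simple coordination game whose preferred equilibrium is (0, 0).
R1 = np.array([[4.0, 0.0],
               [0.0, 2.0]])
R2 = R1.copy()

def softmax(x, temp=1.0):
    z = x / temp
    z = z - np.max(z)          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def opponent_conditional(R_opp, my_action, temp=1.0):
    """Level-1 opponent model: assume the opponent soft-best-responds to
    the action I am about to take, i.e. rho_{-i}(a_{-i} | a_i)."""
    return softmax(R_opp[my_action, :], temp)

def pr2_value(R_me, R_opp, my_action, temp=1.0):
    """Value of my_action after marginalizing over the modeled reaction:
    E_{a_{-i} ~ rho}[ R_i(a_i, a_{-i}) ]."""
    rho = opponent_conditional(R_opp, my_action, temp)
    return R_me[my_action, :] @ rho

# Agent 1's level-1 policy: a soft best response to the recursive values.
values = np.array([pr2_value(R1, R2, a) for a in range(2)])
pi_1 = softmax(values)
print("PR2 values per action:", values)  # action 0 dominates (~3.93 vs ~1.76)
print("agent 1 policy:", pi_1)
```

On this payoff matrix, marginalizing over the opponent's modeled reaction makes action 0 clearly preferable, mirroring the abstract's point that reasoning about the opponent's response can steer learning toward an equilibrium that independent, non-recursive learners may miss.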