Exponentially fast convergence to (strict) equilibrium via hedging

J Cohen, A Héliou, P Mertikopoulos - arXiv preprint arXiv:1607.08863, 2016 - arxiv.org
Motivated by applications to data networks where fast convergence is essential, we analyze the problem of learning in generic N-person games that admit a Nash equilibrium in pure strategies. Specifically, we consider a scenario where players interact repeatedly and try to learn from past experience by making small adjustments based on local (and possibly imperfect) payoff information. For concreteness, we focus on the so-called "hedge" variant of the exponential weights algorithm, where players select an action with probability proportional to the exponential of the action's cumulative payoff over time. When players have perfect information on their mixed payoffs, the algorithm converges locally to a strict equilibrium, and the rate of convergence is exponentially fast, of the order of $\mathcal{O}\big(\exp(-a \sum_{j=1}^{t} \gamma_j)\big)$, where $a > 0$ is a constant and $\gamma_j$ is the algorithm's step-size. In the presence of uncertainty, convergence requires a more conservative step-size policy, but, with high probability, the algorithm remains locally convergent and achieves an exponential convergence rate.
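As a concrete illustration of the hedge update described in the abstract, the sketch below runs one round of the exponential weights rule for a single player: step-size-weighted payoffs are accumulated into per-action scores, and the mixed strategy assigns each action a probability proportional to the exponential of its score. This is a minimal sketch under assumed notation (NumPy arrays, a hypothetical `hedge_step` helper, and a decreasing step-size schedule chosen for illustration), not the authors' implementation.

```python
import numpy as np

def hedge_step(scores, payoffs, step_size):
    """One hedge (exponential weights) update for a single player.

    scores:    running sum of step-size-weighted payoffs, one entry per action
    payoffs:   payoff observed for each action in the current round
    step_size: the step-size gamma_t used in the current round
    """
    scores = scores + step_size * np.asarray(payoffs)
    # Mixed strategy: probability of each action proportional to exp(score).
    logits = scores - scores.max()      # subtract the max for numerical stability
    strategy = np.exp(logits)
    strategy /= strategy.sum()
    return scores, strategy

# Toy usage: two actions, repeated play with a decreasing step-size gamma_t = 1/t.
scores = np.zeros(2)
for t in range(1, 101):
    payoffs = np.array([1.0, 0.5])      # placeholder payoffs, purely for illustration
    scores, strategy = hedge_step(scores, payoffs, step_size=1.0 / t)
print(strategy)                         # mass concentrates on the better action
```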