Scalable primal-dual actor-critic method for safe multi-agent RL with general utilities
Advances in Neural Information Processing Systems, 2024•proceedings.neurips.cc
Abstract
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints. The objective and constraints are described by general utilities, i.e., nonlinear functions of the long-term state-action occupancy measure, which encompass broader decision-making goals such as risk, exploration, or imitation. The exponential growth of the state-action space size with the number of agents presents challenges for global observability, further exacerbated by the global coupling arising from agents' safety constraints. To tackle this issue, we propose a primal-dual method utilizing shadow reward and κ-hop neighbor truncation under a form of correlation decay property, where κ is the communication radius. In the exact setting, our algorithm converges to a first-order stationary point (FOSP) at the rate of O(T^{-2/3}). In the sample-based setting, we demonstrate that, with high probability, our algorithm requires Õ(ε^{-3.5}) samples to achieve an ε-FOSP with an approximation error of O(φ_0^{2κ}), where φ_0 ∈ (0, 1). Finally, we demonstrate the effectiveness of our model through extensive numerical experiments.
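To make the primal-dual structure mentioned in the abstract concrete, below is a minimal Python sketch of one centralized primal-dual iteration for a constrained problem with general utilities. It is an illustration under assumed notation, not the paper's algorithm: the function names, step sizes, and the particular way the shadow reward is formed are hypothetical, and the paper's distributed actor-critic estimation and κ-hop neighbor truncation are omitted.

```python
# Minimal sketch (illustrative, not the paper's method): one iteration of a
# centralized primal-dual update for  max_theta f(lam(theta))  s.t.  g(lam(theta)) >= 0,
# where lam(theta) is the long-term state-action occupancy measure induced by the
# policy parameters theta, and f, g are general (possibly nonlinear) utilities.
import numpy as np

def primal_dual_step(theta, mu, occupancy, grad_f, g, jac_g, policy_grad,
                     eta_theta=1e-2, eta_mu=1e-2):
    """One primal ascent step on the policy and one projected dual step on the multipliers."""
    lam = occupancy(theta)                       # occupancy measure (or an estimate of it)
    # "Shadow reward": gradient of the Lagrangian's utility term with respect to the
    # occupancy measure, used as a surrogate reward in the policy-gradient step.
    shadow_reward = grad_f(lam) + jac_g(lam).T @ mu
    theta = theta + eta_theta * policy_grad(theta, shadow_reward)   # primal ascent
    mu = np.maximum(0.0, mu - eta_mu * g(lam))                      # dual descent, projected onto mu >= 0
    return theta, mu
```

In the decentralized setting the abstract describes, each agent would instead estimate a local shadow reward and truncate its gradient estimates to information gathered from its κ-hop neighborhood, which is where the correlation decay property enters.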