Adversarial option-aware hierarchical imitation learning

M Jing, W Huang, F Sun, X Ma… - International Conference on Machine Learning, 2021 - proceedings.mlr.press
Abstract
It has been a challenge to learn skills for an agent from long-horizon, unannotated demonstrations. Existing approaches like Hierarchical Imitation Learning (HIL) are prone to compounding errors or suboptimal solutions. In this paper, we propose Option-GAIL, a novel method for learning skills over long horizons. The key idea of Option-GAIL is to model the task hierarchy with options and to train the policy via generative adversarial optimization. In particular, we propose an Expectation-Maximization (EM)-style algorithm: an E-step that samples the options of the expert conditioned on the currently learned policy, and an M-step that updates the low- and high-level policies of the agent simultaneously to minimize the newly proposed option-occupancy measurement between the expert and the agent. We theoretically prove the convergence of the proposed algorithm. Experiments show that Option-GAIL consistently outperforms other counterparts across a variety of tasks.
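As an illustrative aid only, the following is a minimal sketch of the EM-style loop described in the abstract, assuming small tabular high- and low-level option policies. The names (TabularHighPolicy, TabularLowPolicy, e_step) are hypothetical and not from the paper's code; the E-step is shown here as Viterbi-style hard option inference, and the adversarial M-step (a discriminator over the option-occupancy measure plus policy optimization) is left as a placeholder comment rather than the paper's actual implementation.

```python
# Hypothetical sketch of an Option-GAIL-style EM loop; names and toy setup are
# illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, N_OPTIONS = 5, 3, 2

class TabularHighPolicy:
    """pi_hi(o_t | s_t, o_{t-1}): selects the current option; index N_OPTIONS
    on the o_{t-1} axis stands for 'no previous option' at t = 0."""
    def __init__(self):
        self.logits = rng.normal(size=(N_STATES, N_OPTIONS + 1, N_OPTIONS))

    def probs(self, s, o_prev):
        z = self.logits[s, o_prev]
        e = np.exp(z - z.max())
        return e / e.sum()

class TabularLowPolicy:
    """pi_lo(a_t | s_t, o_t): acts within the currently active option."""
    def __init__(self):
        self.logits = rng.normal(size=(N_STATES, N_OPTIONS, N_ACTIONS))

    def probs(self, s, o):
        z = self.logits[s, o]
        e = np.exp(z - z.max())
        return e / e.sum()

def e_step(traj, pi_hi, pi_lo):
    """Viterbi-style inference of the latent option sequence for one expert
    trajectory [(s_0, a_0), ...] under the *current* learned policies."""
    T = len(traj)
    score = np.full((T, N_OPTIONS), -np.inf)
    back = np.zeros((T, N_OPTIONS), dtype=int)
    s0, a0 = traj[0]
    for o in range(N_OPTIONS):
        score[0, o] = (np.log(pi_hi.probs(s0, N_OPTIONS)[o])
                       + np.log(pi_lo.probs(s0, o)[a0]))
    for t in range(1, T):
        s, a = traj[t]
        for o in range(N_OPTIONS):
            trans = [score[t - 1, op] + np.log(pi_hi.probs(s, op)[o])
                     for op in range(N_OPTIONS)]
            back[t, o] = int(np.argmax(trans))
            score[t, o] = max(trans) + np.log(pi_lo.probs(s, o)[a])
    # Backtrack the most likely option path.
    opts = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        opts.append(back[t, opts[-1]])
    return opts[::-1]

# Toy expert trajectory of (state, action) pairs; real code would use demonstrations.
expert_traj = [(rng.integers(N_STATES), rng.integers(N_ACTIONS)) for _ in range(8)]
pi_hi, pi_lo = TabularHighPolicy(), TabularLowPolicy()

for it in range(3):
    # E-step: label the expert demonstration with options under the current policies.
    expert_options = e_step(expert_traj, pi_hi, pi_lo)
    # M-step (placeholder): Option-GAIL would fit a discriminator on option-occupancy
    # samples from expert vs. agent and update pi_hi / pi_lo adversarially.
    print(f"iter {it}: inferred expert options = {expert_options}")
```

The hard Viterbi assignment above is one simple way to condition on the current policy; a sampling-based E-step over option posteriors could be substituted without changing the overall loop structure.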