Why does hierarchy (sometimes) work so well in reinforcement learning? O Nachum, H Tang, X Lu, S Gu, H Lee, S Levine. arXiv preprint arXiv:1909.10618, 2019. | 122 | 2019 |
Dynamics generalization via information bottleneck in deep reinforcement learning. X Lu, K Lee, P Abbeel, S Tiomkin. arXiv preprint arXiv:2008.00614, 2020. | 32 | 2020 |
Predictive coding for boosting deep reinforcement learning with sparse rewards. X Lu, S Tiomkin, P Abbeel. arXiv preprint arXiv:1912.13414, 2019. | 7 | 2019 |
Generalization via Information Bottleneck in Deep Reinforcement Learning. X Lu, S Tiomkin, P Abbeel. Master's thesis, University of California at Berkeley, 2020. | 1 | 2020 |