Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. JD Williams, K Asadi, G Zweig. arXiv preprint arXiv:1702.03274, 2017. Cited by 414.
Dive into deep learning (chapter 17: Reinforcement Learning). P Chaudhari, R Fakoor, K Asadi. 2023. Cited by 393*.
An Alternative Softmax Operator for Reinforcement Learning. K Asadi, ML Littman. Proceedings of the 34th International Conference on Machine Learning, 243-252, 2017. Cited by 240.
Lipschitz Continuity in Model-based Reinforcement Learning. K Asadi, D Misra, ML Littman. Proceedings of the 35th International Conference on Machine Learning, 2018. Cited by 185.
Deepmellow: removing the need for a target network in deep Q-learning. S Kim, K Asadi, M Littman, G Konidaris. Proceedings of the Twenty-Eighth International Joint Conference on …, 2019. Cited by 80.
State abstraction as compression in apprenticeship learning. D Abel, D Arumugam, K Asadi, Y Jinnai, ML Littman, LLS Wong. Proceedings of the AAAI Conference on Artificial Intelligence 33, 3134-3142, 2019. Cited by 62.
Combating the Compounding-Error Problem with a Multi-step Model. K Asadi, D Misra, S Kim, ML Littman. arXiv preprint arXiv:1905.13320, 2019. Cited by 60.
Lipschitz lifelong reinforcement learning. E Lecarpentier, D Abel, K Asadi, Y Jinnai, E Rachelson, ML Littman. Proceedings of the AAAI Conference on Artificial Intelligence 35 (9), 8270-8278, 2021. Cited by 42.
Mean Actor Critic. K Asadi, C Allen, M Roderick, A Mohamed, G Konidaris, M Littman. arXiv preprint arXiv:1709.00503, 2017. Cited by 39*.
Continuous doubly constrained batch reinforcement learning. R Fakoor, J Mueller, K Asadi, P Chaudhari, AJ Smola. arXiv preprint arXiv:2102.09225, 2021. Cited by 35.
Deep radial-basis value functions for continuous control. K Asadi, N Parikh, RE Parr, GD Konidaris, ML Littman. Proceedings of the AAAI Conference on Artificial Intelligence, 2021. Cited by 29*.
Sample-efficient Reinforcement Learning for Dialog Control. K Asadi, JD Williams. arXiv preprint arXiv:1612.06000, 2016. Cited by 25.
Resetting the optimizer in deep RL: An empirical study. K Asadi, R Fakoor, S Sabach. Advances in Neural Information Processing Systems 36, 2023. Cited by 18.
Strengths, weaknesses, and combinations of model-based and model-free reinforcement learning. K Asadi. Department of Computing Science, University of Alberta, 2015. Cited by 15.
Mitigating Planner Overfitting in Model-Based Reinforcement Learning. D Arumugam, D Abel, K Asadi, N Gopalan, C Grimm, JK Lee, L Lehnert, ... arXiv preprint arXiv:1812.01129, 2018. Cited by 14.
Towards a Simple Approach to Multi-step Model-based Reinforcement Learning. K Asadi, E Cater, D Misra, ML Littman. arXiv preprint arXiv:1811.00128, 2018. Cited by 14.
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models. Z Liu, J Zhang, K Asadi, Y Liu, D Zhao, S Sabach, R Fakoor. International Conference on Learning Representations, 2024. Cited by 13.
Equivalence between Wasserstein and value-aware model-based reinforcement learning. K Asadi, E Cater, D Misra, ML Littman. FAIM Workshop on Prediction and Generative Modeling in Reinforcement Learning 3, 2018. Cited by 13*.
Learning State Abstractions for Transfer in Continuous Control. K Asadi, D Abel, ML Littman. arXiv preprint arXiv:2002.05518, 2020. Cited by 8.
On Welfare-Centric Fair Reinforcement Learning. C Cousins, K Asadi, E Lobo, ML Littman. Reinforcement Learning Conference, 2024. Cited by 7*.