Chenliang Li
Verified email at tamu.edu
Title
Cited by
Year
Maximum-likelihood inverse reinforcement learning with finite-time guarantees
S Zeng, C Li, A Garcia, M Hong
Advances in Neural Information Processing Systems 35, 10122-10135, 2022
25 · 2022
Understanding expertise through demonstrations: A maximum likelihood framework for offline inverse reinforcement learning
S Zeng, C Li, A Garcia, M Hong
arXiv preprint arXiv:2302.07457, 2023
8 · 2023
A Bayesian approach to robust inverse reinforcement learning
R Wei, S Zeng, C Li, A Garcia, AD McDonald, M Hong
Conference on Robot Learning, 2304-2322, 2023
3 · 2023
When Demonstrations meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning
S Zeng, C Li, A Garcia, M Hong
Advances in Neural Information Processing Systems 36, 2024
2 · 2024
Robust inverse reinforcement learning through Bayesian theory of mind
R Wei, S Zeng, C Li, A Garcia, A McDonald, M Hong
First Workshop on Theory of Mind in Communicating Agents, 2023
1 · 2023
Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback
C Li, S Zeng, Z Liao, J Li, D Kang, A Garcia, M Hong
arXiv preprint arXiv:2406.06874, 2024
2024
Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment
J Li, S Zeng, HT Wai, C Li, A Garcia, M Hong
arXiv preprint arXiv:2405.17888, 2024
2024
Transformer Based Approach for Wireless Resource Allocation Problems Involving Mixed Discrete and Continuous Variables
B Song, Z Zhou, C Li, D Guo, X Fu, M Hong
2023 IEEE 24th International Workshop on Signal Processing Advances in …, 2023
2023
Articles 1–8