A simple unified framework for detecting out-of-distribution samples and adversarial attacks. K Lee, K Lee, H Lee, J Shin. Advances in Neural Information Processing Systems 31, 2018. Cited by 1949.
Decision transformer: Reinforcement learning via sequence modeling. L Chen, K Lu, A Rajeswaran, K Lee, A Grover, M Laskin, P Abbeel, et al. Advances in Neural Information Processing Systems, 2021. Cited by 1303.
Training confidence-calibrated classifiers for detecting out-of-distribution samples. K Lee, H Lee, K Lee, J Shin. International Conference on Learning Representations, 2017. Cited by 971.
Using pre-training can improve model robustness and uncertainty. D Hendrycks, K Lee, M Mazeika. International Conference on Machine Learning, 2712-2721, 2019. Cited by 777.
Reinforcement learning with augmented data. M Laskin, K Lee, A Stooke, L Pinto, P Abbeel, A Srinivas. Advances in Neural Information Processing Systems, 2020. Cited by 659.
Decoupling representation learning from reinforcement learning. A Stooke, K Lee, P Abbeel, M Laskin. International Conference on Machine Learning, 9870-9879, 2021. Cited by 326.
Regularizing class-wise predictions via self-knowledge distillation. S Yun, J Park, K Lee, J Shin. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. Cited by 303.
Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning. K Lee, M Laskin, A Srinivas, P Abbeel. International Conference on Machine Learning, 6131-6141, 2021. Cited by 231.
Overcoming catastrophic forgetting with unlabeled data in the wild. K Lee, K Lee, J Shin, H Lee. Proceedings of the IEEE/CVF International Conference on Computer Vision, 312-321, 2019. Cited by 220.
PEBBLE: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training. K Lee, L Smith, P Abbeel. International Conference on Machine Learning, 2021. Cited by 209.
Network randomization: A simple technique for generalization in deep reinforcement learning. K Lee, K Lee, J Shin, H Lee. International Conference on Learning Representations, 2019. Cited by 198.
Offline-to-online reinforcement learning via balanced replay and pessimistic Q-ensemble. S Lee, Y Seo, K Lee, P Abbeel, J Shin. Conference on Robot Learning, 2021. Cited by 150.
Robust inference via generative classifiers for handling noisy labels. K Lee, S Yun, K Lee, H Lee, B Li, J Shin. International Conference on Machine Learning, 3763-3772, 2019. Cited by 142.
Aligning text-to-image models using human feedback. K Lee, H Liu, M Ryu, O Watkins, Y Du, C Boutilier, P Abbeel, et al. arXiv preprint arXiv:2302.12192, 2023. Cited by 133.
URLB: Unsupervised reinforcement learning benchmark. M Laskin, D Yarats, H Liu, K Lee, A Zhan, K Lu, C Cang, L Pinto, P Abbeel. Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021. Cited by 123.
Context-aware dynamics model for generalization in model-based reinforcement learning. K Lee, Y Seo, S Lee, H Lee, J Shin. International Conference on Machine Learning, 5757-5766, 2020. Cited by 119.
State entropy maximization with random encoders for efficient exploration. Y Seo, L Chen, J Shin, H Lee, P Abbeel, K Lee. International Conference on Machine Learning, 2021. Cited by 114.
Masked world models for visual control. Y Seo, D Hafner, H Liu, F Liu, S James, K Lee, P Abbeel. Conference on Robot Learning, 1332-1344, 2023. Cited by 90.
Reinforcement learning with action-free pre-training from videos. Y Seo, K Lee, SL James, P Abbeel. International Conference on Machine Learning, 19561-19579, 2022. Cited by 89.
B-Pref: Benchmarking preference-based reinforcement learning. K Lee, L Smith, A Dragan, P Abbeel. Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021. Cited by 86.