Separate to Adapt: Open Set Domain Adaptation via Progressive Separation. H Liu, Z Cao, M Long, J Wang, Q Yang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019. Cited by 320.
Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers. H Liu, M Long, J Wang, M Jordan. International Conference on Machine Learning, 4013-4022, 2019. Cited by 290.
Cycle Self-Training for Domain Adaptation. H Liu, J Wang, M Long. Advances in Neural Information Processing Systems 34, 2021. Cited by 167.
Self-Supervised Learning Is More Robust to Dataset Imbalance. H Liu, JZ HaoChen, A Gaidon, T Ma. International Conference on Learning Representations, 2022. Cited by 153.
Sophia: A Scalable Stochastic Second-Order Optimizer for Language Model Pre-training. H Liu, Z Li, D Hall, P Liang, T Ma. International Conference on Learning Representations, 2024. Cited by 75.
Learning to Adapt to Evolving Domains. H Liu, M Long, J Wang, Y Wang. Advances in Neural Information Processing Systems 33, 2020. Cited by 68.
Towards Understanding the Transferability of Deep Representations. H Liu, M Long, J Wang, MI Jordan. arXiv preprint arXiv:1909.12031, 2019. Cited by 31.
Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models. H Liu, SM Xie, Z Li, T Ma. International Conference on Machine Learning, 2023. Cited by 28.
Chain of Thought Empowers Transformers to Solve Inherently Serial Problems. Z Li, H Liu, D Zhou, T Ma. International Conference on Learning Representations, 2024. Cited by 7.
Meta-Learning Transferable Representations with a Single Target Domain. H Liu, JZ HaoChen, C Wei, T Ma. arXiv preprint arXiv:2011.01418, 2020. Cited by 7.