How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. Y Wang, H Ivison, P Dasigi, J Hessel, T Khot, K Chandu, D Wadden, ... Advances in Neural Information Processing Systems 36, 74764-74786, 2023. Cited by 187.
Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. H Ivison, Y Wang, V Pyatkin, N Lambert, M Peters, P Dasigi, J Jang, ... arXiv preprint arXiv:2311.10702, 2023. Cited by 83.
OLMo: Accelerating the Science of Language Models. D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ... arXiv preprint arXiv:2402.00838, 2024. Cited by 31.
Local Interpretations for Explainable Natural Language Processing: A Survey. S Luo, H Ivison, SC Han, J Poon. ACM Computing Surveys 56 (9), 1-36, 2024. Cited by 30.
HINT: Hypernetwork Instruction Tuning for Efficient Zero- and Few-Shot Generalisation. H Ivison, A Bhagia, Y Wang, H Hajishirzi, ME Peters. Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023. Cited by 17.
Hyperdecoders: Instance-Specific Decoders for Multi-Task NLP. H Ivison, ME Peters. arXiv preprint arXiv:2203.08304, 2022. Cited by 15.
TESS: Text-to-Text Self-Conditioned Simplex Diffusion. RK Mahabadi, H Ivison, J Tae, J Henderson, I Beltagy, ME Peters, ... arXiv preprint arXiv:2305.08379, 2023. Cited by 13.
Data-Efficient Finetuning Using Cross-Task Nearest Neighbors. H Ivison, NA Smith, H Hajishirzi, P Dasigi. arXiv preprint arXiv:2212.00196, 2022. Cited by 11.
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback. H Ivison, Y Wang, J Liu, Z Wu, V Pyatkin, N Lambert, NA Smith, Y Choi, ... arXiv preprint arXiv:2406.09279, 2024. Cited by 1.
Would You Like Fries with That? Modular Multi-Hop Reasoning. HJ Ivison. The University of Sydney, Australia, 2020.
Backtracking Mathematical Reasoning of Language Models to the Pretraining Data. Y Razeghi, H Ivison, S Singh, Y Elazar. The Second Tiny Papers Track at ICLR, 2024.