ALoRA: Allocating low-rank adaptation for fine-tuning large language models

Z Liu, J Lyn, W Zhu, X Tian, Y Graham - arXiv preprint arXiv:2403.16187, 2024 - arxiv.org
Parameter-efficient fine-tuning (PEFT) is widely studied for its effectiveness and efficiency in
the era of large language models. Low-rank adaptation (LoRA) has demonstrated …

Overview of the promptCBLUE shared task in CHIP2023

W Zhu, X Wang, M Chen, B Tang - China Health Information Processing …, 2023 - Springer
This paper presents an overview of the PromptCBLUE shared task (http://cips-chip.org.cn/2023/eval1) held in the CHIP-2023 Conference. This shared task reformulates the …

ECNU-LLM@CHIP-PromptCBLUE: Prompt Optimization and In-Context Learning for Chinese Medical Tasks

H Zheng, M Guan, Y Mei, Y Li, Y Wu - China Health Information Processing …, 2023 - Springer
Our team, ECNU-LLM, presents a method of in-context learning for enhancing the
performance of large language models without fine-tuning in the 9th China Health …
