ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models
Parameter-efficient fine-tuning (PEFT) is widely studied for its effectiveness and efficiency in
the era of large language models. Low-rank adaptation (LoRA) has demonstrated …
Overview of the PromptCBLUE Shared Task in CHIP2023
This paper presents an overview of the PromptCBLUE shared task (http://cips-chip.org.cn/2023/eval1) held in the CHIP-2023 Conference. This shared task reformulates the …
ECNU-LLM@CHIP-PromptCBLUE: Prompt Optimization and In-Context Learning for Chinese Medical Tasks
H Zheng, M Guan, Y Mei, Y Li, Y Wu - China Health Information Processing …, 2023 - Springer
Our team, ECNU-LLM, presents a method of in-context learning for enhancing the
performance of large language models without fine-tuning in the 9th China Health …
Overview of the PromptCBLUE Shared Task in CHIP2023
Wei Zhu¹, Xiaoling Wang¹, Mosha Chen², and Buzhou Tang³
1 East China Normal University …
3 Harbin Institute of Technology, Shenzhen, China
Abstract. This paper presents an overview of the PromptCBLUE shared task (http://cips-chip.org.cn/2023/eval1) held in the …
ECNU-LLM@CHIP-PromptCBLUE: Prompt Optimization and In-Context Learning for Chinese Medical Tasks
H Zheng, M Guan, Y Mei, Y Li… - … : Evaluation Track Papers …, 2024 - books.google.com
Our team, ECNU-LLM, presents a method of in-context learning for enhancing the
performance of large language models without fine-tuning in the 9th China Health …