Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, J Zhang, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …

Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes

Z Qin, D Chen, B Qian, B Ding, Y Li, S Deng - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained large language models (LLMs) require fine-tuning to improve their
responsiveness to natural language instructions. Federated learning (FL) offers a way to …

Empirical guidelines for deploying LLMs onto resource-constrained edge devices

R Qin, D Liu, C Xu, Z Yan, Z Tan, Z Jia… - arXiv preprint arXiv …, 2024 - arxiv.org
The scaling laws have become the de facto guidelines for designing large language models
(LLMs), but they were studied under the assumption of unlimited computing resources for …

Low-rank adaptation of large language model rescoring for parameter-efficient speech recognition

Y Yu, CHH Yang, J Kolehmainen… - 2023 IEEE Automatic …, 2023 - ieeexplore.ieee.org
We propose a neural language modeling system based on low-rank adaptation (LoRA) for
speech recognition output rescoring. Although pretrained language models (LMs) like BERT …

LLM-MARS: Large language model for behavior tree generation and NLP-enhanced dialogue in multi-agent robot systems

A Lykov, M Dronova, N Naglov, M Litvinov… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper introduces LLM-MARS, the first technology that utilizes a Large Language Model-based
Artificial Intelligence for Multi-Agent Robot Systems. LLM-MARS enables dynamic …

Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning

R Wang, P Li - arXiv preprint arXiv:2405.18292, 2024 - arxiv.org
Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of Large
Language Models (LLMs) to various downstream applications. However, the effectiveness of …

Deeper Insights Without Updates: The Power of In-Context Learning Over Fine-Tuning

Q Yin, X He, L Deng, CT Leong, F Wang, Y Yan… - arXiv preprint arXiv …, 2024 - arxiv.org
Fine-tuning and in-context learning (ICL) are two prevalent methods in imbuing large
language models with task-specific knowledge. It is commonly believed that fine-tuning can …

Tuning a SAM-Based Model With Multi-Cognitive Visual Adapter to Remote Sensing Instance Segmentation

L Zheng, X Pu, F Xu - IEEE Journal of Selected Topics in …, 2024 - ieeexplore.ieee.org
The Segment Anything Model (SAM), a foundational model designed for promptable
segmentation tasks, demonstrates exceptional generalization capabilities, making it highly …

Fine-tuning and deploying large language models over edges: Issues and approaches

Y Dong, H Zhang, C Li, S Guo, V Leung… - arXiv preprint arXiv …, 2024 - arxiv.org
Since the invention of GPT-2 1.5B in 2019, large language models (LLMs) have
transitioned from specialized models to versatile foundation models. The LLMs exhibit …

Federated and edge learning for large language models

F Piccialli, D Chiaro, P Qi, V Bellandi, E Damiani - Information Fusion, 2024 - Elsevier
As the demand for sophisticated language models (LMs) continues to grow, the necessity to
deploy them efficiently across federated and edge environments becomes increasingly …