Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes
Pre-trained large language models (LLMs) require fine-tuning to improve their
responsiveness to natural language instructions. Federated learning (FL) offers a way to …
Empirical guidelines for deploying LLMs onto resource-constrained edge devices
The scaling laws have become the de facto guidelines for designing large language models
(LLMs), but they were studied under the assumption of unlimited computing resources for …
Low-rank adaptation of large language model rescoring for parameter-efficient speech recognition
We propose a neural language modeling system based on low-rank adaptation (LoRA) for
speech recognition output rescoring. Although pretrained language models (LMs) like BERT …
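The low-rank adaptation (LoRA) technique named in the entry above can be illustrated with a minimal sketch. All shapes, names, and the scaling factor here are illustrative assumptions, not details from the paper: the pretrained weight stays frozen while only two small factors are trained.

```python
import numpy as np

# Minimal LoRA sketch (illustrative shapes, not from the paper).
rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2            # full dims and low rank r << d
W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight

# Trainable low-rank factors; B starts at zero so the adapted model
# initially matches the pretrained one exactly.
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))

def adapted_forward(x, alpha=1.0):
    # y = (W + alpha * B @ A) x ; only A and B are updated during tuning,
    # so trainable parameters drop from d_out*d_in to r*(d_out + d_in).
    return W @ x + alpha * (B @ (A @ x))

x = rng.normal(size=d_in)
# Zero-initialized B means the adapter is a no-op before training.
assert np.allclose(adapted_forward(x), W @ x)
```

Because the update factors as `B @ A`, the adapter can also be merged into `W` after training, adding no inference-time overhead.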
LLM-MARS: Large language model for behavior tree generation and NLP-enhanced dialogue in multi-agent robot systems
A Lykov, M Dronova, N Naglov, M Litvinov… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper introduces LLM-MARS, the first technology that utilizes a Large Language Model-based
Artificial Intelligence for Multi-Agent Robot Systems. LLM-MARS enables dynamic …
Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning
Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of Large
Language Models (LLMs) to various downstream applications. However, the effectiveness of …
Deeper Insights Without Updates: The Power of In-Context Learning Over Fine-Tuning
Fine-tuning and in-context learning (ICL) are two prevalent methods in imbuing large
language models with task-specific knowledge. It is commonly believed that fine-tuning can …
Tuning a SAM-Based Model With Multi-Cognitive Visual Adapter to Remote Sensing Instance Segmentation
The Segment Anything Model (SAM), a foundational model designed for promptable
segmentation tasks, demonstrates exceptional generalization capabilities, making it highly …
Fine-tuning and deploying large language models over edges: Issues and approaches
Since the invention of GPT-2 (1.5B) in 2019, large language models (LLMs) have
transitioned from specialized models to versatile foundation models. The LLMs exhibit …
Federated and edge learning for large language models
As the demand for sophisticated language models (LMs) continues to grow, the necessity to
deploy them efficiently across federated and edge environments becomes increasingly …