Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
End-edge-cloud collaborative computing for deep learning: A comprehensive survey
The booming development of deep learning applications and services heavily relies on
large deep learning models and massive data in the cloud. However, cloud-based deep …
MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation
Abstract The Segment Anything Model (SAM), a foundation model for general image
segmentation, has demonstrated impressive zero-shot performance across numerous …
Adapting language models to compress contexts
Transformer-based language models (LMs) are powerful and widely applicable tools, but
their usefulness is constrained by a finite context window and the expensive computational …
BiomedGPT: A unified and generalist biomedical generative pre-trained transformer for vision, language, and multimodal tasks
Conventional task- and modality-specific artificial intelligence (AI) models are inflexible in
real-world deployment and maintenance for biomedicine. At the same time, the growing …
LLM4TS: Two-stage fine-tuning for time-series forecasting with pre-trained LLMs
In this work, we leverage pre-trained Large Language Models (LLMs) to enhance time-
series forecasting. Mirroring the growing interest in unifying models for Natural Language …
Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks
Fine-tuning large pre-trained models has become the de facto strategy for developing both
task-specific and general-purpose machine learning systems, including developing models …
Adapters: A unified library for parameter-efficient and modular transfer learning
We introduce Adapters, an open-source library that unifies parameter-efficient and modular
transfer learning in large language models. By integrating 10 diverse adapter methods into a …
LLaRA: Aligning large language models with sequential recommenders
Sequential recommendation aims to predict the subsequent items matching user preference
based on her/his historical interactions. With the development of Large Language Models …