Reusing pretrained models by multi-linear operators for efficient training
Training large models from scratch usually costs a substantial amount of resources. To address
this problem, recent studies such as bert2BERT and LiGO have reused small pretrained …
Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes
Pre-trained large language models (LLMs) require fine-tuning to improve their
responsiveness to natural language instructions. Federated learning (FL) offers a way to …
Grounding foundation models through federated transfer learning: A general framework
Foundation Models (FMs) such as GPT-4 encoded with vast knowledge and powerful
emergent abilities have achieved remarkable success in various natural language …
MM-Soc: Benchmarking multimodal large language models in social media platforms
Social media platforms are hubs for multimodal information exchange, encompassing text,
images, and videos, making it challenging for machines to comprehend the information or …
Federated fine-tuning of LLMs on the very edge: The good, the bad, the ugly
With the emergence of AI regulations, such as the EU AI Act, requirements for simple data
lineage, enforcement of low data bias, and energy efficiency have become a priority for …
Text Embedding Inversion Security for Multilingual Language Models
Textual data is often represented as real-numbered embeddings in NLP, particularly with the
popularity of large language models (LLMs) and Embeddings as a Service (EaaS) …
Analysis of Privacy Leakage in Federated Large Language Models
With the rapid adoption of Federated Learning (FL) as the training and tuning protocol for
applications utilizing Large Language Models (LLMs), recent research highlights the need …
On Disentanglement of Asymmetrical Knowledge Transfer for Modality-Task Agnostic Federated Learning
There has been growing concern regarding data privacy during the development and
deployment of Multimodal Foundation Models for Artificial General Intelligence (AGI), while …
A survey on efficient federated learning methods for foundation model training
Federated Learning (FL) has become an established technique to facilitate privacy-
preserving collaborative training. However, new approaches to FL often discuss their …
Federated Large Language Models: Current Progress and Future Directions
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …