Reusing pretrained models by multi-linear operators for efficient training

Y Pan, Y Yuan, Y Yin, Z Xu, L Shang… - Advances in Neural …, 2023 - proceedings.neurips.cc
Training large models from scratch usually costs a substantial amount of resources. To address this problem, recent studies such as bert2BERT and LiGO have reused small pretrained …

Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes

Z Qin, D Chen, B Qian, B Ding, Y Li, S Deng - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained large language models (LLMs) require fine-tuning to improve their
responsiveness to natural language instructions. Federated learning (FL) offers a way to …

Grounding foundation models through federated transfer learning: A general framework

Y Kang, T Fan, H Gu, L Fan, Q Yang - arXiv preprint arXiv:2311.17431, 2023 - arxiv.org
Foundation Models (FMs) such as GPT-4, which encode vast knowledge and exhibit powerful emergent abilities, have achieved remarkable success in various natural language …

MM-Soc: Benchmarking multimodal large language models in social media platforms

Y Jin, M Choi, G Verma, J Wang, S Kumar - arXiv preprint arXiv …, 2024 - arxiv.org
Social media platforms are hubs for multimodal information exchange, encompassing text,
images, and videos, making it challenging for machines to comprehend the information or …

Federated fine-tuning of LLMs on the very edge: The good, the bad, the ugly

H Woisetschläger, A Erben, S Wang, R Mayer… - Proceedings of the …, 2024 - dl.acm.org
With the emergence of AI regulations, such as the EU AI Act, requirements for simple data
lineage, enforcement of low data bias, and energy efficiency have become a priority for …

Text Embedding Inversion Security for Multilingual Language Models

Y Chen, H Lent, J Bjerva - … of the 62nd Annual Meeting of the …, 2024 - aclanthology.org
Textual data is often represented as real-valued embeddings in NLP, particularly with the popularity of large language models (LLMs) and Embeddings as a Service (EaaS) …

Analysis of Privacy Leakage in Federated Large Language Models

M Vu, T Nguyen, MT Thai - International Conference on …, 2024 - proceedings.mlr.press
With the rapid adoption of Federated Learning (FL) as the training and tuning protocol for
applications utilizing Large Language Models (LLMs), recent research highlights the need …

On Disentanglement of Asymmetrical Knowledge Transfer for Modality-Task Agnostic Federated Learning

J Chen, A Zhang - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
There has been growing concern regarding data privacy during the development and
deployment of Multimodal Foundation Models for Artificial General Intelligence (AGI), while …

A survey on efficient federated learning methods for foundation model training

H Woisetschläger, A Isenko, S Wang, R Mayer… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated Learning (FL) has become an established technique to facilitate privacy-
preserving collaborative training. However, new approaches to FL often discuss their …

Federated Large Language Models: Current Progress and Future Directions

Y Yao, J Zhang, J Wu, C Huang, Y Xia, T Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …