Beimingwu: A learnware dock system
The learnware paradigm proposed by Zhou (2016) aims to enable users to leverage
numerous existing high-performing models instead of building machine learning models …
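At the core of a learnware dock is specification-based search: each uploaded model carries a statistical specification (an RKME-style reduced point set), and the dock recommends the model whose specification best matches the user's data. The following is a minimal sketch of that matching step using maximum mean discrepancy (MMD); the function names, kernel bandwidth, and data are illustrative, not Beimingwu's actual API.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Z, beta, gamma=0.5):
    """Squared MMD between the user's empirical distribution (uniform
    weights over X) and a reduced-set specification (points Z, weights beta)."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)
    return (w @ rbf_kernel(X, X, gamma) @ w
            + beta @ rbf_kernel(Z, Z, gamma) @ beta
            - 2.0 * w @ rbf_kernel(X, Z, gamma) @ beta)

rng = np.random.default_rng(0)
user_X = rng.normal(0.0, 1.0, size=(200, 8))  # user's task data

# Hypothetical specifications from two developers: small weighted point
# sets summarizing each model's training distribution.
specs = {
    "model_a": (rng.normal(0.0, 1.0, size=(10, 8)), np.full(10, 0.1)),
    "model_b": (rng.normal(3.0, 1.0, size=(10, 8)), np.full(10, 0.1)),
}

# Recommend the learnware whose specification is closest to the user's data.
ranked = sorted(specs, key=lambda m: mmd2(user_X, *specs[m]))
print(ranked)  # model_a matches the user's distribution better
```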
Selecting large language model to fine-tune via rectified scaling law
The ever-growing ecosystem of LLMs has posed a challenge in selecting the most
appropriate pre-trained model to fine-tune amidst a sea of options. Given constrained …
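The selection recipe this abstract alludes to can be approximated by fitting a scaling curve to a handful of cheap, small-subset fine-tuning runs per candidate and extrapolating to the full budget. The sketch below uses the rectified form L(D) = B / (D_l + D)^beta + E, where D_l plays the role of a "pre-learned" data size; treat the exact form and all numbers as illustrative assumptions rather than the paper's verbatim procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def rectified_law(D, B, D_l, beta, E):
    # Rectified scaling law: the pre-learned data size D_l flattens the
    # curve in the low-data regime; E is the irreducible loss floor.
    return B / (D_l + D) ** beta + E

# Fine-tuning losses measured on small data subsets (synthetic numbers).
sizes  = np.array([1e2, 3e2, 1e3, 3e3, 1e4])
losses = np.array([2.90, 2.71, 2.42, 2.11, 1.83])

popt, _ = curve_fit(
    rectified_law, sizes, losses,
    p0=[50.0, 1e3, 0.5, 1.0],
    bounds=([0, 0, 0, 0], [np.inf, np.inf, 2.0, np.inf]),
    maxfev=20000,
)

# Extrapolate to the full fine-tuning budget; repeating this per candidate
# LLM and picking the lowest predicted loss is the selection step.
full_budget = 1e6
print(rectified_law(full_budget, *popt))
```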
Foundation model is efficient multimodal multitask model selector
This paper investigates an under-explored but important problem: given a collection of pre-
trained neural networks, predicting their performance on each multi-modal task without fine …
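In the spirit of this line of work, a fine-tuning-free selector can score each pre-trained network by how well a cheap closed-form regression maps its frozen features onto label embeddings produced by a foundation model (for example, text-encoder embeddings of the class names). The ridge-regression scoring below is a simplified stand-in for the paper's estimator; all shapes and names are illustrative.

```python
import numpy as np

def regression_score(F, Y, lam=1e-3):
    """Negative MSE of a ridge map from frozen features F (N x d) to
    foundation-model label embeddings Y (N x k): higher = more transferable."""
    d = F.shape[1]
    W = np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ Y)
    return -float(np.mean((Y - F @ W) ** 2))

rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 32))  # unified label embeddings for 500 samples

# Hypothetical frozen features extracted by two candidate backbones on
# the same target-task samples.
candidates = {
    "backbone_a": Y @ rng.normal(size=(32, 64)) + 0.1 * rng.normal(size=(500, 64)),
    "backbone_b": rng.normal(size=(500, 64)),  # features unrelated to labels
}

for name, F in candidates.items():
    print(name, regression_score(F, Y))  # backbone_a scores higher
```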
Parrot: Multilingual Visual Instruction Tuning
The rapid development of Multimodal Large Language Models (MLLMs) like GPT-4V has
marked a significant step towards artificial general intelligence. Existing methods mainly …
One size does not fit all in evaluating model selection scores for image classification
N. Abou Baker and U. Handmann, Scientific Reports, 2024 (nature.com)
Selecting pretrained models for image classification often involves computationally intensive
finetuning. This study addresses a research gap in the standardized evaluation of …
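The standard protocol in this literature correlates each selection score's ranking of the candidate models with the ranking obtained after actually fine-tuning them. The snippet below sketches that evaluation: scipy's kendalltau and weightedtau are real functions, while the scores and accuracies are made-up numbers for five hypothetical candidates.

```python
from scipy.stats import kendalltau, weightedtau

# For each pretrained candidate: a selection score computed without
# fine-tuning, and the test accuracy after full fine-tuning (illustrative).
selection_scores = [0.62, 0.48, 0.71, 0.55, 0.40]
finetuned_acc    = [0.83, 0.74, 0.88, 0.79, 0.70]

tau, _ = kendalltau(selection_scores, finetuned_acc)
wtau, _ = weightedtau(selection_scores, finetuned_acc)  # emphasizes top ranks
print(f"Kendall tau = {tau:.2f}, weighted tau = {wtau:.2f}")
```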
Universal embedding for pre-trained models and data bench
The transformer architecture has shown significant improvements in the performance of
various natural language processing (NLP) tasks. One of the great advantages of …
Wings: Learning Multimodal LLMs without Text-only Forgetting
Multimodal large language models (MLLMs), initiated with a trained LLM, first align images
with text and then fine-tune on multimodal mixed inputs. However, the MLLM …
OmniEvalKit: A Modular, Lightweight Toolbox for Evaluating Large Language Model and its Omni-Extensions
The rapid advancements in Large Language Models (LLMs) have significantly expanded
their applications, ranging from multilingual support to domain-specific tasks and multimodal …
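A registry-based design is the usual way such a toolbox stays modular: datasets, models, and metrics register themselves under string keys, and the evaluation loop is written once against those interfaces. The sketch below is a generic toy of that pattern, not OmniEvalKit's actual API.

```python
from typing import Callable, Dict

DATASETS: Dict[str, Callable] = {}
MODELS: Dict[str, Callable] = {}

def register(registry: Dict[str, Callable], name: str):
    # Decorator that adds a loader function to the given registry.
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

@register(DATASETS, "toy_qa")
def load_toy_qa():
    return [("What is 2+2?", "4"), ("Capital of France?", "Paris")]

@register(MODELS, "echo_model")
def load_echo_model():
    # Stand-in for an LLM wrapper exposing a uniform generate() interface.
    return lambda prompt: "4" if "2+2" in prompt else "Paris"

def evaluate(model_name: str, dataset_name: str) -> float:
    model, data = MODELS[model_name](), DATASETS[dataset_name]()
    correct = sum(model(q) == a for q, a in data)
    return correct / len(data)

print(evaluate("echo_model", "toy_qa"))  # 1.0
```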
Improving LLMs for Recommendation with Out-Of-Vocabulary Tokens
Characterizing users and items through vector representations is crucial for various tasks in
recommender systems. Recent approaches attempt to apply Large Language Models …
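Mechanically, giving an LLM out-of-vocabulary user and item tokens comes down to extending the tokenizer and resizing the embedding matrix. The Hugging Face calls below are real; the token naming scheme is illustrative, and the paper's specific embedding initialization (as opposed to the default random one used here) is where its contribution would lie.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One dedicated OOV token per user and per item, so IDs are never split
# into meaningless subwords by the original vocabulary.
new_tokens = [f"<user_{u}>" for u in range(100)] + [f"<item_{i}>" for i in range(500)]
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# The new embedding rows start randomly initialized; a recommendation-aware
# scheme would instead initialize them from user/item interaction statistics
# before fine-tuning.
prompt = "<user_7> recently interacted with <item_42>. Recommend:"
print(tokenizer.tokenize(prompt)[:5])
```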