Beyond sole strength: Customized ensembles for generalized vision-language models
Fine-tuning pre-trained vision-language models (VLMs), e.g., CLIP, for open-world generalization has gained increasing popularity due to its practical value. However, performance advancements are limited when relying solely on intricate algorithmic designs for a single model, even one exhibiting strong performance, e.g., CLIP-ViT-B/16. This paper, for the first time, explores the collaborative potential of leveraging much weaker VLMs to enhance the generalization of a robust single model. The affirmative findings motivate us to address the generalization problem from a novel perspective, i.e., an ensemble of pre-trained VLMs. We introduce three customized ensemble strategies, each tailored to one specific scenario. First, we introduce the zero-shot ensemble, which automatically adjusts the logits of different models based on their confidence when only pre-trained VLMs are available. Furthermore, for scenarios with extra few-shot samples, we propose the training-free and tuning ensembles, offering flexibility based on the availability of computing resources. The proposed ensemble strategies are evaluated on zero-shot, base-to-new, and cross-dataset generalization, achieving new state-of-the-art performance. Notably, this work represents an initial stride toward enhancing the generalization performance of VLMs via ensembling. The code is available at https://github.com/zhiheLu/Ensemble_VLM.git.
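The zero-shot ensemble described above weights each model's logits by its prediction confidence. Below is a minimal PyTorch sketch of one plausible instantiation, assuming confidence is measured as the maximum softmax probability per sample; the function name and the exact weighting rule are illustrative assumptions, not the paper's verified implementation.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_ensemble(logits_list):
    """Fuse logits from several VLMs, weighting each model per sample by
    its confidence (here: max softmax probability). Hypothetical sketch;
    the paper's exact weighting rule may differ.

    logits_list: list of [batch, num_classes] tensors, one per model.
    Returns: fused [batch, num_classes] logits.
    """
    # Per-sample confidence of each model: [num_models, batch]
    conf = torch.stack(
        [F.softmax(l, dim=-1).max(dim=-1).values for l in logits_list], dim=0
    )
    # Normalize confidences into per-sample model weights: [num_models, batch]
    weights = conf / conf.sum(dim=0, keepdim=True)
    # Weighted sum of logits over models: [batch, num_classes]
    stacked = torch.stack(logits_list, dim=0)
    return (weights.unsqueeze(-1) * stacked).sum(dim=0)
```

In this setup, a strong backbone such as CLIP-ViT-B/16 could be fused with weaker ones (e.g., ViT-B/32 or RN50 variants): on samples where a weaker model is confident and the stronger one is not, the weaker model's logits contribute more, which matches the intuition of leveraging weaker VLMs to aid a robust single model.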