StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-Based Domain Generalization

S Bose, A Jha, E Fini, M Singha… - Proceedings of the …, 2024 - openaccess.thecvf.com
Large-scale foundation models, such as CLIP, have demonstrated impressive zero-
shot generalization performance on downstream tasks, leveraging well-designed language …

Continual zero-shot learning through semantically guided generative random walks

W Zhang, P Janson, K Yi… - Proceedings of the …, 2023 - openaccess.thecvf.com
Learning novel concepts, remembering previous knowledge, and adapting it to future tasks
occur simultaneously throughout a human's lifetime. To model such comprehensive abilities …

SEIC: Semantic Embedding with Intermediate Classes for Zero-Shot Domain Generalization

B Mondal, S Biswas - … of the Asian Conference on Computer …, 2022 - openaccess.thecvf.com
In this work, we address the Zero-Shot Domain Generalization (ZSDG) task, where the goal
is to learn a model from multiple source domains, such that it can generalize well to both …

Handling Class-Imbalance for Improved Zero-Shot Domain Generalization

A Arfeen, T Dutta, S Biswas - BMVC, 2022 - bmvc2022.mpi-inf.mpg.de
Zero-shot domain generalization (ZSDG) simultaneously addresses the challenges of
dissimilar distribution and disjoint label-spaces of the training and test data in the context of …

Less but Better: Enabling Generalized Zero-shot Learning Towards Unseen Domains by Intrinsic Learning from Redundant LLM Semantics

J Yue, J Zhao, C Zhao - arXiv preprint arXiv:2403.14362, 2024 - arxiv.org
Generalized zero-shot learning (GZSL) focuses on recognizing seen and unseen classes
against the domain shift problem (DSP), where data of unseen classes may be misclassified as …

INDIGO: Intrinsic Multimodality for Domain Generalization

P Mangla, S Chandhok, M Aggarwal… - arXiv preprint arXiv …, 2022 - arxiv.org
For models to generalize under unseen domains (aka domain generalization), it is crucial to
learn feature representations that are domain-agnostic and capture the underlying …

Prompt Tuning Is All We Need?

H Yu, H Zheng, Y Zhang, S Xie, X Cao, Z Fang - openreview.net
Recent advances in pre-trained vision-language models, e.g., CLIP, have demonstrated
remarkable success in domain generalization (DG) by tuning prompts. To promote DG, one …