Learning from few examples: A summary of approaches to few-shot learning

A Parnami, M Lee - arXiv preprint arXiv:2203.04291, 2022 - arxiv.org
Few-Shot Learning refers to the problem of learning the underlying pattern in the data just
from a few training samples. Requiring a large number of data samples, many deep learning …

A review of generalized zero-shot learning methods

F Pourpanah, M Abdar, Y Luo, X Zhou… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Generalized zero-shot learning (GZSL) aims to train a model for classifying data samples
under the condition that some output classes are unknown during supervised learning. To …

Is ChatGPT a general-purpose natural language processing task solver?

C Qin, A Zhang, Z Zhang, J Chen, M Yasunaga… - arXiv preprint arXiv …, 2023 - arxiv.org
Spurred by advancements in scale, large language models (LLMs) have demonstrated the
ability to perform a variety of natural language processing (NLP) tasks zero-shot, i.e., without …

Expanding language-image pretrained models for general video recognition

B Ni, H Peng, M Chen, S Zhang, G Meng, J Fu… - … on Computer Vision, 2022 - Springer
Contrastive language-image pretraining has shown great success in learning visual-textual
joint representation from web-scale data, demonstrating remarkable “zero-shot” …

PointCLIP: Point cloud understanding by CLIP

R Zhang, Z Guo, W Zhang, K Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
Recently, zero-shot and few-shot learning via Contrastive Vision-Language Pre-training
(CLIP) have shown inspirational performance on 2D visual recognition, which learns to …

Fine-tuned CLIP models are efficient video learners

H Rasheed, MU Khattak, M Maaz… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP
model. Since training on a similar scale for videos is infeasible, recent approaches focus on …

Combined scaling for zero-shot transfer learning

H Pham, Z Dai, G Ghiasi, K Kawaguchi, H Liu, AW Yu… - Neurocomputing, 2023 - Elsevier
Recent developments in multimodal training methodologies, including CLIP and ALIGN,
obviate the necessity for individual data labeling. These approaches utilize pairs of data and …

Vita-CLIP: Video and text adaptive CLIP via multimodal prompting

ST Wasim, M Naseer, S Khan… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adopting contrastive image-text pretrained models like CLIP towards video classification has
gained attention due to its cost-effectiveness and competitive performance. However, recent …

Attribute prototype network for zero-shot learning

W Xu, Y Xian, J Wang, B Schiele… - Advances in Neural …, 2020 - proceedings.neurips.cc
From the beginning of zero-shot learning research, visual attributes have been shown to
play an important role. In order to better transfer attribute-based knowledge from known to …

A survey of zero-shot learning: Settings, methods, and applications

W Wang, VW Zheng, H Yu, C Miao - ACM Transactions on Intelligent …, 2019 - dl.acm.org
Most machine-learning methods focus on classifying instances whose classes have already
been seen in training. In practice, many applications require classifying instances whose …