Learning from few examples: A summary of approaches to few-shot learning
Few-Shot Learning refers to the problem of learning the underlying pattern in the data just
from a few training samples. Requiring a large number of data samples, many deep learning …
A review of generalized zero-shot learning methods
Generalized zero-shot learning (GZSL) aims to train a model for classifying data samples
under the condition that some output classes are unknown during supervised learning. To …
Is ChatGPT a general-purpose natural language processing task solver?
Spurred by advancements in scale, large language models (LLMs) have demonstrated the
ability to perform a variety of natural language processing (NLP) tasks zero-shot, i.e., without …
Expanding language-image pretrained models for general video recognition
Contrastive language-image pretraining has shown great success in learning visual-textual
joint representation from web-scale data, demonstrating remarkable “zero-shot” …
PointCLIP: Point cloud understanding by CLIP
Recently, zero-shot and few-shot learning via Contrastive Vision-Language Pre-training
(CLIP) have shown inspirational performance on 2D visual recognition, which learns to …
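Several of the entries above center on zero-shot transfer with CLIP: class names are rendered as text prompts and compared with image embeddings in a shared space, so no labeled examples of the target classes are needed. Below is a minimal sketch of that idea, assuming the Hugging Face transformers CLIP interface; the checkpoint name, image path, and label set are illustrative, not taken from the cited papers.

```python
# Minimal sketch: zero-shot image classification with a pretrained CLIP model.
# Checkpoint, image path, and class names are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                  # any RGB image
class_names = ["cat", "dog", "airplane"]           # labels unseen during any fine-tuning
prompts = [f"a photo of a {c}" for c in class_names]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; a softmax over the
# candidate prompts yields per-class probabilities with no task-specific training.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(class_names, probs[0].tolist())))
```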
Fine-tuned CLIP models are efficient video learners
Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP
model. Since training on a similar scale for videos is infeasible, recent approaches focus on …
Combined scaling for zero-shot transfer learning
Recent developments in multimodal training methodologies, including CLIP and ALIGN,
obviate the necessity for individual data labeling. These approaches utilize pairs of data and …
Vita-CLIP: Video and text adaptive CLIP via multimodal prompting
Adopting contrastive image-text pretrained models like CLIP towards video classification has
gained attention due to their cost-effectiveness and competitive performance. However, recent …
Attribute prototype network for zero-shot learning
From the beginning of zero-shot learning research, visual attributes have been shown to
play an important role. In order to better transfer attribute-based knowledge from known to …
A survey of zero-shot learning: Settings, methods, and applications
Most machine-learning methods focus on classifying instances whose classes have already
been seen in training. In practice, many applications require classifying instances whose …